Research methodology

How we code interview transcripts

Our approach to qualitative analysis transforms open-ended interview responses into structured, reliable findings. Every code assignment is evidence-based, every theme is validated, and every conclusion is defensible.

Why systematic coding matters

Interview transcripts are rich with insight, but without systematic analysis, findings become anecdotal. Two analysts reading the same transcripts can reach different conclusions. Themes can be too broad (losing nuance) or too narrow (missing patterns). And there is no way to demonstrate to stakeholders that the results are reliable.

Systematic coding solves these problems. It provides a structured, repeatable process for transforming open-ended responses into quantified findings, with built-in quality checks that ensure accuracy and consistency.

κ ≥ 0.65: Inter-rater reliability target for all coded data
5: Independent AI agents with two rounds of reliability checks
2 rounds: Code, resolve, re-code, validate

Our approach: Codebook Thematic Analysis

Not all thematic analysis is created equal. Braun and Clarke (2022) identify three distinct variants, each with different strengths and trade-offs. We use Codebook Thematic Analysis because it combines the rigor clients expect with the flexibility that real interview data demands.

Reflexive TA

Highly flexible
Captures deep interpretation
No codebook
No reliability measurement
Results vary by analyst

Codebook TA

Our approach
Structured codebook with definitions
Supports inter-rater reliability
Iterative, evolving codebook
Works with AI-assisted coding
Scales to large datasets

Coding Reliability TA

Highest reliability
Fixed, testable codebook
Rigid, cannot evolve
Misses unexpected themes
Requires all codes upfront

Why Codebook TA?

In applied market research, we rarely know every theme before reading the data. Reflexive TA gives us no way to prove our coding is reliable. Coding Reliability TA locks us into a fixed framework that cannot adapt. Codebook TA gives us the best of both: a structured codebook that evolves iteratively as we discover what the data contains, with reliability measurement built in.

Three types of interview questions

Different questions produce different kinds of data. Each type requires a distinct coding approach, matched to the structure of the response.

Rank-order variables

Open responses mapped to an ordinal scale with predefined buckets.

Example question
"How large is your company?"
Example response
"We're about 800 people globally"
Coded as
500-1,000 employees
Method: Directed content analysis (Hsieh & Shannon, 2005). Predefined categories with clear boundaries.
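
As an illustration of how this bucket mapping might be applied once a headcount has been extracted from a response, here is a minimal sketch; the bucket boundaries and function name are illustrative, not a fixed scale:

```python
# Minimal sketch: mapping an extracted headcount onto a predefined ordinal bucket.
# The bucket boundaries and labels below are illustrative, not a fixed scale.

COMPANY_SIZE_BUCKETS = [
    (1, 49, "1-49 employees"),
    (50, 199, "50-199 employees"),
    (200, 499, "200-499 employees"),
    (500, 1000, "500-1,000 employees"),
    (1001, None, "1,000+ employees"),
]

def code_company_size(headcount: int) -> str:
    """Return the ordinal bucket label for an estimated headcount."""
    for low, high, label in COMPANY_SIZE_BUCKETS:
        if headcount >= low and (high is None or headcount <= high):
            return label
    raise ValueError(f"Headcount {headcount} falls outside the defined scale")

# "We're about 800 people globally" -> a coder extracts roughly 800 employees.
print(code_company_size(800))  # 500-1,000 employees
```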

Categorical variables

Single-dimension classification into a small set of distinct categories.

Example question
"How do you feel about your current tool?"
Example response
"It does the job but there are definitely things that frustrate me"
Coded as
Mixed
Method: Directed content analysis with anchor descriptions defining each category level.

Thematic coding

Complex, open-ended responses broken into individual statements and grouped into validated themes.

Example question
"Why did you switch providers?"
Example response
"The onboarding was clunky, we had to re-enter data in three places, and honestly we were paying too much for what we got"
Coded as
Poor usability, Missing features, Cost concerns
Method: Full Codebook Thematic Analysis with meaning unit segmentation, two-pass coding, and theme validation.

The coding process

Our thematic coding follows a four-step process grounded in established qualitative research methods. Each step has specific rules and quality checks.

Step 1: Segment into meaning units

Each response is broken into discrete meaning units: the smallest segment of text that contains a single idea or claim (Graneheim & Lundman, 2004). A participant who says three different things gets three separate meaning units, each coded independently.

Before segmentation
"The onboarding was clunky, we had to re-enter data in three places, and we were paying too much"
After segmentation
MU-1 "The onboarding was clunky"
MU-2 "we had to re-enter data in three places"
MU-3 "we were paying too much"
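
To make the unit of analysis concrete, here is a minimal sketch of how segmented responses could be represented downstream; the segmentation judgment itself is analyst- or AI-led, and the field names are illustrative:

```python
# Minimal sketch of a meaning-unit record; field names and IDs are illustrative.
# Segmentation is a judgment call, so this only shows how the resulting units
# are stored so that each one can be coded independently.
from dataclasses import dataclass, field

@dataclass
class MeaningUnit:
    unit_id: str          # e.g. "P07-Q3-MU-2"
    participant_id: str   # which interview the unit came from
    question_id: str      # which question prompted the response
    text: str             # verbatim segment containing a single idea
    codes: list[str] = field(default_factory=list)  # filled in during coding

# One response, three ideas, three independently coded units.
units = [
    MeaningUnit("P07-Q3-MU-1", "P07", "Q3", "The onboarding was clunky"),
    MeaningUnit("P07-Q3-MU-2", "P07", "Q3", "we had to re-enter data in three places"),
    MeaningUnit("P07-Q3-MU-3", "P07", "Q3", "we were paying too much"),
]
```
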
Step 2: First-cycle coding

Each meaning unit receives a descriptive code: a short label (2-5 words) capturing what the statement is about (Saldana, 2016). Codes use the participant's own language where it is distinctive ("in vivo coding") and standardized labels where consistency matters.

"The onboarding was clunky" Poor usability
"re-enter data in three places" Duplicate data entry
"paying too much" Cost concerns
Step 3: Theme construction (two-pass approach)

Related codes are grouped into broader themes, each organized around a single concept. We use a two-pass approach (Deterding & Waters, 2021):

Pass 1: Discovery
Read 10-20% of transcripts and identify recurring patterns. Build an initial codebook with definitions, examples, and exclusion criteria for each theme. This pass is analyst-led.
Pass 2: Application
Apply the finalized codebook to all transcripts. Every response is coded against the same definitions. AI-assisted coding handles volume; human review ensures quality.
Theme: Product usability issues
Poor usability, Confusing navigation, Too many clicks, Duplicate data entry
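
A minimal sketch of how the two passes divide the work, assuming meaning units already carry first-cycle codes; the structures and names are illustrative:

```python
# Minimal sketch of the two-pass workflow; names and structures are illustrative.

# Pass 1 (discovery) is analyst-led: a 10-20% sample of transcripts is read and
# a codebook is drafted by hand. This dictionary just represents its output.
codebook = {
    "Product usability issues": ["Poor usability", "Confusing navigation",
                                 "Too many clicks", "Duplicate data entry"],
    # ... further themes drafted from the discovery sample ...
}

# Pass 2 (application): every coded meaning unit is mapped against the same
# finalized codebook, so all transcripts are judged by identical definitions.
code_to_theme = {code: theme for theme, codes in codebook.items() for code in codes}

def apply_codebook(coded_units):
    """Attach themes to meaning units that already carry first-cycle codes."""
    for unit in coded_units:
        unit["themes"] = sorted({code_to_theme[c] for c in unit["codes"]
                                 if c in code_to_theme})
    return coded_units

sample = [{"unit_id": "P07-Q3-MU-2", "codes": ["Duplicate data entry"]}]
print(apply_codebook(sample)[0]["themes"])  # ['Product usability issues']
```
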
Step 4: Theme refinement

Candidate themes are tested against specific rules to ensure they are coherent, distinct, and analytically useful. Themes may be split, merged, or reorganized based on these checks.

Theme validation rules

Themes are not arbitrary groupings. Each must pass specific validation criteria before it enters the final analysis.

5% minimum frequency

A theme must be mentioned by at least 5% of participants to stand on its own. Themes below this threshold are merged with related themes or moved to "Other." This prevents findings from being driven by isolated comments.

Split when two concepts emerge

If the quotes within a theme cluster into two or more distinct ideas, the theme is too broad. A theme about "convenience" that contains both "close to my office" and "fast service" captures two different concepts and should be split for actionable analysis.

Merge at 70%+ participant overlap

When 70% or more of participants who mention Theme A also mention Theme B, the themes likely represent the same underlying concept. They are merged into a single theme to avoid double-counting and simplify the analysis.

Target 6-10 themes per question

Fewer than 4 themes for an open-ended question usually means important distinctions are being lost. More than 12 usually means themes are not abstracted enough. Sub-themes preserve nuance within the 6-10 target range.

"Other" capped at 15%

If the "Other" category exceeds 15% of responses, a meaningful pattern is being missed. The uncategorized responses are reviewed to identify hidden themes that should be added to the codebook.

Sub-themes preserve detail

Broad themes work for executive summaries. Sub-themes provide the detail needed for actionable recommendations. A theme like "Value for money" (21%) might contain sub-themes for "Low absolute prices" (14%), "Deals and promotions" (8%), and "Portion value" (5%).
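
A minimal sketch of how these checks could be automated for a single question; the input format (a mapping from each theme to the set of participants who mention it) and the function name are illustrative, while the thresholds come from the rules above. The split rule is omitted because it requires reading the quotes themselves:

```python
# Minimal sketch of automated theme-validation checks; the input format and
# function name are illustrative. Thresholds follow the rules described above.

def validate_themes(theme_participants, n_participants, other_share):
    """theme_participants: {theme: set of participant IDs mentioning it}.
    other_share: fraction of responses currently sitting in "Other"."""
    flags = []

    # A theme must be mentioned by at least 5% of participants.
    for theme, mentioners in theme_participants.items():
        if len(mentioners) / n_participants < 0.05:
            flags.append(f"Merge or move to Other: '{theme}' is below 5% frequency")

    # Merge candidates: 70%+ of one theme's participants also mention another.
    themes = list(theme_participants)
    for a in themes:
        for b in themes:
            if a == b or not theme_participants[a]:
                continue
            overlap = len(theme_participants[a] & theme_participants[b])
            if overlap / len(theme_participants[a]) >= 0.70:
                flags.append(f"Merge candidate: '{a}' overlaps '{b}' at 70%+")

    # "Other" should not exceed 15% of responses.
    if other_share > 0.15:
        flags.append("Review uncategorized responses: 'Other' exceeds 15%")

    # Target 6-10 themes per question.
    if not 6 <= len(themes) <= 10:
        flags.append(f"Review theme count: {len(themes)} themes, target is 6-10")

    return flags
```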

The 5-agent reliability system

Every coded dataset passes through a two-round, five-agent process that produces measurable evidence of coding accuracy. Five independent AI agents code, cross-check, and validate every finding. No single agent's judgment is trusted in isolation.

Why multiple independent agents?

A single AI coder, no matter how accurate, provides no way to measure reliability. The same data could be coded differently by a different system, and there would be no way to know which is correct. Multiple independent agents solve this by replicating the gold-standard practice of inter-rater reliability from human qualitative research (Cohen, 1960; Krippendorff, 2004), but without the time, cost, and fatigue limitations of human coders.

Three agents would suffice for basic reliability measurement. We use five because the second round of validation catches residual errors in the first round's resolution, improving accuracy from approximately 85-88% to 90-93%. For consulting engagements where findings drive significant business decisions, that incremental precision matters.

Agent configuration

Independence between agents is not automatic. Two identical AI systems given identical inputs at temperature 0 will produce identical outputs, proving nothing. We design genuine independence into each agent using four configuration levers: model architecture, temperature, persona framing, and codebook emphasis, with transcript ordering and context isolation as additional safeguards for the round 2 re-coder.

Agent 1 (Primary Coder A): Opus, temperature 0, thorough and inclusive persona, inclusion criteria shown first
Agent 2 (Primary Coder B): Sonnet, temperature 0.2, conservative and precise persona, exclusion criteria shown first
Agent 3 (Round 1 resolver): Sonnet, temperature 0, neutral arbiter persona, balanced codebook
Agent 4 (Independent re-coder): Sonnet, temperature 0, balanced fresh-perspective persona, balanced codebook
Agent 5 (Final validator): Opus, temperature 0, senior quality reviewer persona, balanced codebook
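
A minimal sketch of how this configuration could be captured in code; the values mirror the table above, but the schema, field names, and model labels are illustrative placeholders rather than an actual API:

```python
# Minimal sketch of the five-agent configuration; the schema and model labels
# are illustrative placeholders, the values mirror the table above.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConfig:
    agent_id: int
    model: str              # "opus" or "sonnet" tier (placeholder labels)
    temperature: float
    persona: str            # framing prepended to the coding prompt
    codebook_emphasis: str  # "inclusion_first", "exclusion_first", or "balanced"
    role: str

AGENTS = [
    AgentConfig(1, "opus",   0.0, "thorough, inclusive",         "inclusion_first", "Primary Coder A"),
    AgentConfig(2, "sonnet", 0.2, "conservative, precise",       "exclusion_first", "Primary Coder B"),
    AgentConfig(3, "sonnet", 0.0, "neutral arbiter",             "balanced",        "Round 1 resolver"),
    AgentConfig(4, "sonnet", 0.0, "balanced, fresh perspective", "balanced",        "Independent re-coder"),
    AgentConfig(5, "opus",   0.0, "senior quality reviewer",     "balanced",        "Final validator"),
]
```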

How we ensure genuine independence

Different model architectures

Agents 1 and 5 use Opus (deeper reasoning). Agents 2, 3, and 4 use Sonnet (different architecture). Different model weights produce genuinely different coding judgments on ambiguous cases.

Different personas

Agent 1 leans toward including borderline cases. Agent 2 leans toward excluding them. This creates productive tension: where both agree despite opposite biases, confidence is very high. Where they disagree, it surfaces genuine ambiguity.

Temperature variation

Agent 2 operates at temperature 0.2, introducing slight randomness on borderline decisions. This mirrors the natural variation between human coders without degrading accuracy on clear-cut cases.

Codebook emphasis

Agent 1 sees inclusion criteria first for each code. Agent 2 sees exclusion criteria first. This creates different cognitive anchoring without changing the actual rules.

Transcript order

Agent 4 processes transcripts in reverse order. Earlier transcripts subtly influence how coders interpret later ones. Reversing the order prevents the same order bias from appearing in both rounds.

Context isolation

Agent 4 has no knowledge of round 1 results. It codes the full dataset from scratch, providing a completely uncontaminated second opinion.

The two-round process

Round 1: Independent coding and resolution
Agents 1 and 2 code all segments independently

Neither sees the other's work. Both produce codes with written reasoning for every assignment.

Calculate Cohen's Kappa per code

Measure agreement between Agent 1 and Agent 2, corrected for chance. Identify every disagreement with both agents' reasoning.

Agent 3 resolves all disagreements

Reviews both agents' reasoning against the codebook definition. Picks the correct code. Flags ambiguous definitions for human review.

Round 2: Independent validation and final resolution
Agent 4 codes all segments independently

No knowledge of round 1. Different model persona. Reverse transcript order. A completely fresh perspective.

Compare Agent 3's resolved codes vs. Agent 4's codes

Segments where they agree are auto-finalized (double-confirmed). Disagreements proceed to Agent 5.

Agent 5 makes the final call

The most capable model reviews the hardest cases: both rounds' reasoning, the codebook, and the original text. Its decision is final.
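
A minimal sketch of the control flow across both rounds; run_agent is a placeholder for whatever call actually produces an agent's code assignments, and the per-code kappa calculation (shown further below) runs alongside round 1:

```python
# Minimal sketch of the two-round control flow. run_agent is a placeholder for
# the call that returns {segment_id: code} for a given agent; per-code kappa is
# computed alongside round 1 but omitted here for brevity.

def run_agent(agent_id, segment_ids, context=None):
    """Placeholder: return {segment_id: code} produced by the given agent."""
    raise NotImplementedError(f"wire agent {agent_id} to the coding model")

def two_round_coding(segment_ids):
    # Round 1: Agents 1 and 2 code every segment independently.
    codes_1 = run_agent(1, segment_ids)
    codes_2 = run_agent(2, segment_ids)

    # Agent 3 resolves only the segments where Agents 1 and 2 disagree.
    round_1 = {}
    for seg in segment_ids:
        if codes_1[seg] == codes_2[seg]:
            round_1[seg] = codes_1[seg]
        else:
            round_1[seg] = run_agent(
                3, [seg], context={"agent_1": codes_1[seg], "agent_2": codes_2[seg]})[seg]

    # Round 2: Agent 4 re-codes everything from scratch, in reverse transcript
    # order and with no knowledge of round 1 results.
    codes_4 = run_agent(4, list(reversed(segment_ids)))

    # Agreement between round 1 and Agent 4 is auto-finalized (double-confirmed);
    # remaining disagreements go to Agent 5 for the final call.
    final = {}
    for seg, code in round_1.items():
        if codes_4.get(seg) == code:
            final[seg] = code
        else:
            final[seg] = run_agent(
                5, [seg], context={"round_1": code, "round_2": codes_4.get(seg)})[seg]
    return final
```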

The codebook: precision in every definition

The codebook is the single most important factor in coding quality. Research shows that codebooks with full definitions, inclusion/exclusion criteria, and example quotes improve coding accuracy by 15-25 percentage points compared to code labels alone (Pangakis, Wolken, & Fasching, 2023). Each theme entry includes five components:

Example codebook entry
Theme name
Poor post-sale support responsiveness
Definition
Participant describes slow response times, unanswered tickets, difficulty reaching a real person, or long resolution times after becoming a customer.
Include
Any mention of delayed support responses, unresolved issues, phone trees, or being "passed around" between departments.
Exclude
Pre-sale experience, onboarding difficulties, product bugs. These are separate themes.
Example quote
"We submitted a ticket about a payroll error and didn't hear back for two weeks."

Measuring inter-rater reliability

Inter-rater reliability measures whether independent coders assign the same codes to the same data. We use Cohen's Kappa (κ), which corrects for chance agreement (Cohen, 1960). Kappa is calculated per code, because some codes are inherently harder to apply consistently than others.

0.00-0.20: Slight
0.21-0.40: Fair
0.41-0.60: Moderate
0.61-0.80: Substantial
0.81-1.00: Almost perfect

Our minimum threshold: κ ≥ 0.65

Scale: Landis & Koch (1977). Threshold based on Krippendorff (2004) recommendation of α ≥ 0.667 for applied research.
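
Cohen's Kappa is κ = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected by chance. A minimal sketch of the per-code calculation, treating each code as a binary applied / not-applied decision per segment (the decisions below are illustrative):

```python
# Minimal sketch of per-code Cohen's Kappa for two coders, treating the code as
# a binary applied / not-applied decision on every segment. Data is illustrative.

def cohens_kappa(rater_a, rater_b):
    """rater_a, rater_b: equal-length lists of 0/1 decisions per segment."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: both say "applied" plus both say "not applied".
    p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    if expected == 1:
        return 1.0  # no variation in either coder; treat as perfect agreement
    return (observed - expected) / (1 - expected)

# Agent 1 vs Agent 2 on one code across 10 segments.
agent_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
agent_2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
# 0.6: below the 0.65 floor, so this code's definition would be flagged for review.
print(round(cohens_kappa(agent_1, agent_2), 2))
```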

Full audit trail

Every code assignment includes written reasoning from each agent that evaluated it. This creates a chain of evidence from the participant's words to the final theme, making every finding traceable and defensible.

Participant said
"We submitted a ticket about a payroll error and didn't hear back for two weeks."
Agent 1 (inclusive)
Participant describes a specific support ticket unanswered for an extended period. Matches "slow response times" and "unanswered tickets." Coded as: Poor post-sale support
Agent 2 (conservative)
Explicit mention of an unresolved ticket with a specific timeframe (two weeks). Clear match to codebook definition. Coded as: Poor post-sale support
Status
Both agents agree. Auto-confirmed.
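
A minimal sketch of what a single audit-trail record might look like; the field names are illustrative:

```python
# Minimal sketch of an audit-trail record; field names are illustrative.
audit_record = {
    "segment": ("We submitted a ticket about a payroll error and didn't hear "
                "back for two weeks."),
    "final_code": "Poor post-sale support",
    "status": "auto-confirmed",  # both primary coders agreed in round 1
    "history": [
        {"agent": 1, "persona": "inclusive",
         "code": "Poor post-sale support",
         "reasoning": "Specific support ticket unanswered for an extended period; "
                      "matches 'slow response times' and 'unanswered tickets'."},
        {"agent": 2, "persona": "conservative",
         "code": "Poor post-sale support",
         "reasoning": "Explicit unresolved ticket with a specific timeframe; "
                      "clear match to the codebook definition."},
    ],
}
```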

Methodological foundations

Our approach is grounded in established qualitative research methods, each backed by decades of peer-reviewed evidence.

Braun, V. & Clarke, V.
2006
Using thematic analysis in psychology
Qualitative Research in Psychology, 3(2), 77-101
Foundational framework for thematic analysis. One of the most cited methods papers in social science.
Braun, V. & Clarke, V.
2022
Thematic Analysis: A Practical Guide
SAGE Publications
Distinguishes three TA variants (Reflexive, Codebook, Coding Reliability) and provides updated guidance for each.
Graneheim, U.H. & Lundman, B.
2004
Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness
Nurse Education Today, 24(2), 105-112
Standard framework for segmenting transcripts into meaning units and establishing coding trustworthiness.
Saldana, J.
2016
The Coding Manual for Qualitative Researchers
SAGE Publications, 3rd edition
Defines first-cycle and second-cycle coding methods, including descriptive, in vivo, and process coding approaches.
Deterding, N.M. & Waters, M.C.
2021
Flexible coding of in-depth interviews: a twenty-first-century approach
Sociological Methods & Research, 50(2), 708-739
Recommends the two-pass coding approach: initial indexing pass followed by focused coding pass.
Hsieh, H-F. & Shannon, S.E.
2005
Three approaches to qualitative content analysis
Qualitative Health Research, 15(9), 1277-1288
Defines directed content analysis for coding open-ended responses into predefined categories (rank-order and categorical variables).
Cohen, J.
1960
A coefficient of agreement for nominal scales
Educational and Psychological Measurement, 20(1), 37-46
Introduces Cohen's Kappa, the standard inter-rater reliability metric correcting for chance agreement.
Krippendorff, K.
2004
Content Analysis: An Introduction to Its Methodology
SAGE Publications, 2nd edition
Defines Krippendorff's Alpha and establishes reliability thresholds (α ≥ 0.667 for tentative, α ≥ 0.80 for firm conclusions).
Landis, J.R. & Koch, G.G.
1977
The measurement of observer agreement for categorical data
Biometrics, 33(1), 159-174
Establishes the standard interpretation scale for kappa values (slight, fair, moderate, substantial, almost perfect).
Gao, J. et al.
2024
CollabCoder: a lower-barrier, rigorous workflow for inductive collaborative qualitative analysis with large language models
Proceedings of the ACM on Human-Computer Interaction (CSCW)
Demonstrates that human-AI collaborative coding reduces coding time by ~50% while maintaining intercoder reliability.
Pangakis, N., Wolken, S. & Fasching, N.
2023
Automated annotation with generative AI suggests promising avenues for qualitative research
arXiv preprint
Shows that structured codebook prompts with definitions and examples improve AI coding accuracy by 15-25 percentage points.
Miles, M.B., Huberman, A.M. & Saldana, J.
2014
Qualitative Data Analysis: A Methods Sourcebook
SAGE Publications, 3rd edition
Comprehensive reference for qualitative coding methods and quality standards, including the 0.80 threshold on 95% of codes.