Key terms used throughout the Interview Design Academy. Each entry notes which lessons cover the concept.
A common approach that seems reasonable but produces poor results. In interview design, examples include the 'just in case' study with too many questions, or mixing multiple goals into one study.
An ordering principle where open-ended, exploratory questions come first and closed-ended or feature-specific questions come later. This prevents earlier questions from priming or contaminating open responses.
A question with a fixed set of answer options (e.g., ratings, rankings, yes/no). These should come after open-ended questions to avoid priming.
The mental effort required to process information. In voice interviews, participants must understand the question, retrieve a memory, form an opinion, and articulate speech, all in real time. Long or complex prompts increase cognitive load and degrade answer quality.
Used in: Lesson 2
A question that asks two things at once (e.g., 'What did you like and what would you change?'). Participants usually answer only one part, making it impossible to know which objective the response addresses. Split into separate questions instead.
A secondary question triggered after the participant answers a main question. Follow-ups add depth on a specific topic without adding length to the core question list. ReadingMinds supports up to 3 follow-ups per main question.
A short phrase at the end of a question that tells the participant what kind of answer is helpful, without leading them toward a specific answer. Examples: 'One sentence is fine,' 'Start with the main reason,' 'Focus on the first moment you hesitated.'
Wording that nudges a participant toward a preferred answer. Includes phrases like 'Don't you think…,' 'How great was…,' or 'Most people say….' Even subtle leading phrasing can bias responses and reduce data quality.
Used in: Lesson 3
A core question in your study guide that maps directly to one objective. Strong studies use 5–10 main questions. Each should be self-contained, speakable in one breath, and answerable without follow-up.
A specific, user-centered sub-question derived from the research goal. Good objectives describe evidence you can collect from what participants say (e.g., 'Identify the top 3 points of confusion on the pricing page'). Each objective maps to one or more main questions.
Keeping your question short enough to be spoken in one or two breaths. It matters because participants hear the question once in real time; long prompts overload working memory, and people forget the beginning by the time they reach the end.
Used in: Lesson 2
The rule that each question should contain exactly one ask. If your question has two verbs or two 'ands,' it is probably two questions. Splitting them gives cleaner data and reduces participant confusion.
A question that allows participants to respond in their own words without fixed answer options. In voice interviews, these are especially powerful because participants naturally elaborate. Place them before closed-ended questions to avoid priming.
When earlier questions influence how people answer later ones. Asking about specific features first primes participants to mention those features in subsequent open-ended responses, meaning you hear your own agenda reflected back instead of authentic reactions.
Wording that assumes a condition is true before the participant has confirmed it. 'What confused you about onboarding?' assumes confusion happened. Fix it by adding 'if anything' or rephrasing to be open-ended.
Used in: Lesson 3
When exposure to one idea makes related ideas more accessible. In interview design, asking about specific topics early can contaminate open-ended responses later. The fix is broad-to-specific sequencing.
A brief follow-up prompt designed to dig deeper into a specific answer (e.g., 'What happened next?' or 'Can you give me an example?'). Probes add depth without adding length to the core question list.
Used in: Lesson 1
A structured review of every question in your study before launch. Includes checking for double-barrels, leading language, missing time anchors, unclear guardrails, and objective mapping. If a question fails three or more checks, consider cutting it.
Used in: Lesson 4
The broad business question your study is trying to answer (e.g., 'Improve website conversion for our new pricing page'). A single research goal gets broken into 3–5 specific objectives, which then become interview questions.
Used in: Lesson 1
The complete framework for your interview in ReadingMinds. It links a research goal to objectives, then to core questions and follow-up probes. The Study Guide is your blueprint: it forces every question to earn its place.
A phrase that grounds a question in a specific time frame (e.g., 'the last time you…,' 'in the past week'). Time anchors improve recall accuracy and make responses comparable across participants. Avoid vague time references like 'recently.'
A four-part structure for writing effective voice interview questions: (1) Time anchor: 'Think about the last time...,' (2) One clear ask: 'What made you decide to...?,' (3) Guardrail: 'Give one example,' (4) Optional neutral follow-on: 'What happened next?'
Used in: Lesson 2
Simplified definitions of key market research terms, from brand awareness and conjoint analysis to AI-moderated interviews and voice of customer.
Evaluates how your target audience responds to early advertising ideas before full production. Participants view storyboards or rough video sketches and answer questions about relevance, attention-grabbing power, and brand impression.
The creative development of concepts and ideas for advertisements that communicate the brand’s value proposition. It aligns with CX by crafting messages that resonate emotionally and reduce friction in customer engagement.
Semi-structured interviews run by an AI agent that can ask open or closed questions and follow up dynamically, saving time and enabling scale.
Measures how many people recognize a brand only when shown a prompt or sample ad. After asking people to recall brands spontaneously, researchers show a list and ask which they recognize.
The process of segmenting and analyzing target audiences based on demographics, psychographics, behaviors, and preferences. It supports CX by ensuring messaging and ads are tailored to enhance customer interactions at various touchpoints.
AI-powered conversion of spoken words into written text during or after interviews, speeding up analysis and highlighting key quotes without manual effort.
In a traditional focus group, observers sit behind a one-way mirror to watch without influencing participants. In online research, the ‘back room’ is the digital space where stakeholders can view interviews and chat privately with moderators.
Study of psychological and emotional factors influencing consumer decisions, revealing why people make irrational choices in market scenarios.
Processes for spotting and reducing unfair bias in data or models. Crucial for keeping AI scores and recommendations inclusive and defensible.
How familiar consumers are with a product or company name and the positive associations that come with it. High brand awareness means customers can recognize the name and recall positive impressions when making purchasing decisions.
A framework that tracks customer progression through stages of awareness, consideration, purchase, and loyalty. It informs CX by identifying where customers drop off or engage, helping optimize touchpoints for better experiences.
The degree to which customers continue to buy a preferred brand over alternatives, even when competitors offer lower prices. Loyal customers are more profitable because it costs less to retain them than to attract new ones.
Gathers feedback to understand how customers perceive a brand’s reputation, visuals and messages, and compares them to competitors. It helps identify the brand’s strengths and gaps in the marketplace.
The creative process of generating and brainstorming ideas for new products or features, often based on consumer insights, market gaps, or brand goals. It aligns with NPD to create approaches that enhance CX by addressing customer needs.
The evaluation of campaign ideas or creative concepts through consumer feedback, focus groups, or A/B testing. It ensures campaigns contribute to a seamless and delightful CX by validating effectiveness before launch.
A quantitative technique used to understand how consumers value different features of a product or service. Respondents rank or choose between options with varying attributes, allowing researchers to identify which features drive preference and willingness to pay.
Tests whether shoppers confuse one brand or manufacturer with another. By measuring brand mix-ups, companies learn where to clarify their messaging or differentiate better.
Recognizes that buying goods is not just about utility or price; it reflects shared beliefs, values and customs. Mapping these meanings is a key goal of qualitative research.
The study of how consumer behaviors, attitudes, and preferences evolve over time due to social, economic, or technological factors. It informs CX by highlighting how these shifts affect interactions with the brand across touchpoints.
In an experiment, the control group does not receive the new treatment and acts as the baseline. Comparing results between the control and test groups helps isolate the effect of the intervention.
A method of organizing survey data into a two-dimensional table to compare how different groups answered a question. Crosstabs help uncover relationships or patterns between variables.
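A crosstab can be built with nothing more than a tally per (group, answer) pair. The sketch below uses only the Python standard library and invented survey data for illustration; in practice a tool like a spreadsheet pivot table or `pandas.crosstab` does the same thing.

```python
from collections import Counter

# Hypothetical survey records: (segment, answer) pairs for one question.
responses = [
    ("new customer", "satisfied"), ("new customer", "unsatisfied"),
    ("returning", "satisfied"), ("returning", "satisfied"),
    ("new customer", "satisfied"), ("returning", "unsatisfied"),
]

# Tally answers per segment, then lay them out as a two-dimensional table.
counts = Counter(responses)
rows = sorted({seg for seg, _ in responses})
cols = sorted({ans for _, ans in responses})

print("".ljust(14) + "".join(c.ljust(14) for c in cols))
for r in rows:
    print(r.ljust(14) + "".join(str(counts[(r, c)]).ljust(14) for c in cols))
```

Reading across a row shows how one group answered; reading down a column compares groups on one answer, which is where relationships between variables become visible.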
Encompasses the entire journey of a customer with a brand across all touchpoints, including pre-purchase, purchase, and post-purchase interactions, aiming to identify friction and create delightful moments.
The sequence of experiences a customer goes through from first learning about a brand to becoming a loyal advocate. Mapping this journey helps businesses improve touchpoints and remove friction so the experience feels seamless.
A panel of respondents who log their buying, watching or listening habits over time. Their diaries provide rich, longitudinal insights into product use and media consumption.
A genAI-based model of a particular individual that can be used to predict both individual and population-level preferences and behaviors. Use cases include filling in missing data, shortening surveys, standing in for hard-to-recruit groups, journey orchestration, and identifying usability hurdles.
A qualitative session with two participants. They can be a known pair (e.g., a parent and child) or strangers with similar or opposing views; the interaction reveals shared and divergent attitudes.
The initial phase of identifying emerging patterns, behaviors, or preferences among consumers through research, data analysis, or observation. It focuses on spotting potential trends before they become mainstream.
Algorithms that read tone, facial cues, or phrasing to gauge feelings. They can distinguish real enthusiasm from polite indifference in how respondents speak.
Tools used to measure employees’ satisfaction, motivation, and commitment to their roles and the organization. While not directly customer-facing, they impact CX because engaged employees deliver better interactions.
Researchers immerse themselves in participants’ real-world settings to observe behaviors and culture. By watching people in their homes or workplaces, ethnographers uncover unspoken routines and product interactions.
Using cameras and infrared sensors to monitor where and how long participants look at elements on a screen or package. Tracking eye movements reveals which design elements capture attention or cause confusion.
Products that use voice or video and transcribe to text, then feed that text into an LLM for analysis. This fragmented speech-to-text process flattens what customers said and loses emotional depth.
A moderated discussion with a small group (usually 6–12 people) who share key demographics. The moderator encourages interaction to explore opinions, emotions and reactions to concepts or products.
AI systems that produce new content like text, images or questions, aiding in creating dynamic interview scripts or simulating participant responses for testing.
A visual summary of eye-tracking data that shows which parts of a webpage, package or ad attracted the most attention. Areas with more fixations appear in red, while cooler colors indicate less attention.
An edited compilation of the most telling video clips from interviews. Highlight reels capture participants’ words and non-verbal cues to convey insights quickly to stakeholders.
A one-on-one interview where a skilled interviewer explores a participant’s motivations, beliefs and feelings in depth. Unlike focus groups, IDIs avoid group dynamics and allow for deeper probing.
A reward (often a digital gift card or prepaid card) given to participants for their time and honest feedback. Incentives boost participation rates and show appreciation.
A distilled, actionable finding from one or more responses. It reveals what matters most to your customers: emotionally, behaviorally, or thematically. This includes surfaced themes, trends, emotional tone, and quotable moments.
A lightweight recruitment entry point that invites participants into an AI interview. It can be embedded in an app, website, email, or QR code, allowing participants to opt in on their own time.
An AI-moderated conversation between the platform and one participant. Interviews are voice, text, or video and typically last around 10 minutes. Emotionally aware AI moderates the interaction with real-time analysis.
A connected web of entities (products, needs, personas) and their relationships that lets AI surface ‘unknown knowns’ across siloed research files; your insights team’s digital brain.
An analysis method widely used in linguistics and qualitative research that tallies word occurrences and displays each with its surrounding context. Valuable for examining language patterns, themes, and word usage.
A qualitative method that uses probing questions to link product attributes to functional benefits and then to personal values. Asking why someone buys organic food might reveal deeper motivations like health or environmental values.
A massive neural-network model trained on large volumes of text that can read, write, and converse like a human. LLMs sit under the hood of AI moderators and can ask follow-up questions on the fly.
A common survey scale that asks respondents to rate their agreement with a statement on a five- or seven-point scale (e.g., from ‘strongly disagree’ to ‘strongly agree’). Named after American social psychologist Rensis Likert (pronounced LICK-ert), who developed it in his 1932 Ph.D. dissertation.
Research where data is collected from the same participants repeatedly over an extended period. This design tracks changes and trends over time, making it useful for studying behavior shifts or product use patterns.
The practice of gathering and analyzing information about customers and markets to guide business decisions. Traditionally conducted via focus groups and surveys, it increasingly uses online tools with automated transcripts and analytics.
A pre-recruited group of people who agree to take part in ongoing research, such as surveys or interviews. Panels enable longitudinal studies and provide quick access to targeted audiences.
Maximum Difference Scaling (also known as Best-Worst Scaling). A survey-based research method used to measure and rank relative preferences or importance among a list of items, attributes, or options.
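One simple way to summarize MaxDiff data is a count-based score: times picked "best" minus times picked "worst," divided by times shown. The sketch below uses hypothetical choice tasks; real studies typically use more sophisticated models (e.g., hierarchical Bayes), but the counting logic conveys the idea.

```python
from collections import defaultdict

# Hypothetical choice tasks: (items shown, item picked best, item picked worst).
tasks = [
    (["price", "quality", "speed", "support"], "quality", "speed"),
    (["price", "quality", "design", "support"], "quality", "price"),
    (["speed", "design", "price", "support"], "support", "speed"),
]

shown = defaultdict(int)
score = defaultdict(int)
for items, best, worst in tasks:
    for item in items:
        shown[item] += 1   # how often each item appeared
    score[best] += 1       # +1 each time chosen as most important
    score[worst] -= 1      # -1 each time chosen as least important

# Count-based score: (best picks - worst picks) / times shown, in [-1, 1].
ranked = sorted(shown, key=lambda i: score[i] / shown[i], reverse=True)
print(ranked)  # → ['quality', 'support', 'design', 'price', 'speed']
```

Forcing a best and a worst pick in each task is what makes the resulting ranking more discriminating than asking people to rate every item on its own.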
The analysis of how target audiences engage with various media channels (e.g., social media, TV, online platforms). It informs CX by optimizing campaign touchpoints to align with customer preferences and habits.
An approach that combines quantitative and qualitative research to provide a more complete understanding. For example, running a survey to measure satisfaction and then conducting interviews to explore the reasons behind the ratings.
The person (or AI) who guides a discussion. Moderators introduce topics, encourage participation, keep the conversation on track and ensure the research objectives are covered.
AI capability to interpret and generate human language, enabling seamless conversation in AI-moderated interviews and accurate sentiment detection.
A single-question metric that asks customers how likely they are to recommend a company, product or service to others. Respondents rating 9–10 are ‘promoters,’ 7–8 are ‘passives’ and 0–6 are ‘detractors.’ The NPS is calculated by subtracting the percentage of detractors from promoters.
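The NPS arithmetic described above is a one-liner. A minimal sketch with invented ratings:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count toward
    the total but neither add nor subtract. Result ranges -100 to 100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical sample: 5 promoters, 3 passives, 2 detractors.
scores = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(scores))  # → 30.0
```

Note that passives still dilute the score: adding more 7s and 8s pulls NPS toward zero even though they are not subtracted directly.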
The process of researching, ideating, and validating new product concepts to meet customer needs, aligning with the brand’s overall CX strategy by identifying opportunities to enhance user satisfaction.
An external group of people who participate in multiple studies over time.
The process of finding and screening participants for research studies. Recruiters aim for a diverse pool that matches the target demographics and behaviors.
A person who participates in a research project. ‘Participant’ reflects a more collaborative relationship than the older term ‘respondent.’ In the context of ReadingMinds, a participant is an end user or customer who takes part in an interview.
A metric derived from tone and sentiment signals to gauge how eager or reluctant a respondent sounds about making a purchase or commitment; higher scores indicate greater buying intent.
The strategic process of defining how a new product or brand is perceived in the market relative to competitors. It ensures the product aligns with CX goals by creating a clear, compelling identity that resonates with target customers.
Collecting new data directly through methods like surveys, interviews or observations. Primary research provides first-hand insights tailored to your specific questions and complements secondary research.
A high-level research objective. It organizes one or more studies around a strategic business initiative or theme, such as understanding why customers don’t convert or testing early messaging.
The art (and science) of crafting clear, detailed instructions so a GenAI tool gives you exactly the insight you need.
The rhythm, intonation and melody of speech that convey emotion and meaning beyond words. Recognizing prosody helps AI systems interpret how something is said, not just what is said.
An open-ended conversation where respondents speak freely, allowing the interviewer to probe for depth. Such interviews yield richer insights than multiple-choice surveys.
A research approach that seeks to understand how people experience the world. It uses methods like observations, interviews or focus groups to capture the depth and nuance of human behavior and motivation.
The collection and analysis of numerical data to identify patterns, test hypotheses or make predictions. Structured surveys with fixed-response questions, experiments and statistical models are all examples.
Instant processing of interview data as it unfolds, allowing AI to adapt questions or flag emerging themes for immediate stakeholder review.
A short questionnaire or script used to qualify participants for qualitative research. Screeners ensure that only people who meet the study’s criteria are invited.
The blueprint for a study. It defines objectives, research methods (qualitative or quantitative), sample, location, interview guide and tasks. A thoughtful design ensures the project answers the client’s questions effectively.
A person who participates in a research project. Many researchers now use ‘participant’ to reflect a more collaborative relationship.
The data record for a single completed interview. It includes raw input (voice/text/video), sentiment analysis, extracted themes, key moments, and more.
Refers to the quality, reliability and thoroughness of a study. Rigorous research follows sound procedures and careful analysis to ensure trustworthy results.
Selecting a smaller group from a larger population to represent the whole. When sampling is done correctly, researchers can generalize findings to the entire population without studying everyone.
Advanced AI models that use automatic speech-to-speech (S2S) recognition technology to capture tone, prosody and emotion in real time. This allows AI interviewers to adapt their questions on the fly and provide granular sentiment analysis, unlike first-gen products, which flatten voice into text.
Analyzing existing data collected by others, such as academic papers, government statistics or industry reports. Combining secondary research with primary research provides a fuller picture of the market.
The process of determining the emotional tone of text or speech. It categorizes responses more granularly than simple positive/neutral/negative and is vital for understanding customer opinions.
The process of monitoring and analyzing customer emotions, opinions, and attitudes toward a brand over time, often through social media, reviews, or surveys. It supports CX by pinpointing areas for improvement or delight.
Deep understanding of consumer motivations, behaviors, and decision-making processes during the purchasing journey. It informs NPD and CX by identifying opportunities to optimize product offerings.
The process of segmenting transcriptions to identify ‘who spoke when.’ Assigning unique labels to speakers improves transcription clarity in multi-speaker interviews and conversations.
Visual, verbal or audio materials used to spark discussion or communicate ideas in research sessions. Examples include storyboards, prototypes, images, videos, or clickable prototypes shown to participants during an interview.
A self-contained research activity that includes an interview design, participant engagement, and analysis. It encompasses an AI-generated or co-designed interview guide, distribution settings, response collection, and real-time analysis.
Fake users generated by AI from group-level descriptors, used to make population-level predictions. While there may be a few use cases (pre-flighting stimuli, building digital twins of hard-to-reach segments), user research ultimately needs real users.
The creation of concise, memorable phrases that encapsulate the brand or product’s essence. It contributes to CX by reinforcing brand identity and creating positive positioning across customer touchpoints.
The specific group of people most likely to buy your product or service. Defining your target audience by demographics, interests or behaviors allows for more focused marketing and product development.
A named group of users within a workspace. Teams are used for collaboration, ownership, and sharing, helping manage users across organizations who can see and contribute to different research efforts.
Systematic review of qualitative data to identify recurring themes or patterns, accelerated by AI to handle large volumes efficiently.
A deeper investigation into identified consumer trends to understand their implications, scope, and potential impact on the brand. This involves analyzing market data, cultural shifts, and consumer feedback.
Sessions with three participants. They combine the depth of one-on-one interviews with some group dynamics, allowing researchers to test how differing loyalties or opinions play out.
Participants perform tasks with a product or service while researchers observe. Feedback on ease of use and problems informs design improvements.
A bite-sized scenario that spells out who will use an AI research method, what they’ll do, and why it matters to the business.
Focuses on a user’s interaction with a specific product, system, or interface, emphasizing usability, accessibility, and satisfaction within that context.
Collection of direct feedback from customers on their experiences and expectations, analyzed to drive product enhancements and satisfaction.
The top-level container for an organization. It holds all projects, studies, interviews, users, and insights. Workspaces are typically mapped to a company or brand and provide separation, clean analytics, and role-based access control.
The Academy curriculum draws on the following sources. Primary and official sources were prioritized; internal sources were used to align the curriculum to ReadingMinds product workflow.
Social desirability
The tendency for participants to give answers they think are socially acceptable rather than truthful. In voice interviews, this effect is stronger because the AI interviewer feels more present than a text form. Counter it with nonjudgmental wording and explicit permission to be negative.
Used in: Lesson 3