Summary:
Well-written screeners ensure that your study participants are appropriate for your research goals, improve data quality, save resources, and reduce bias.
There is no worse feeling for a researcher than realizing in the middle of a research session that the data is invalid because the participant is a bad match. Avoid this disaster by using a screener to recruit the right participants for your study.
What Is a Screener?
Conducting user research is one of the most effective ways to ensure that the solution you’re designing will serve the needs of the end users. However, it works only if you can find the right test participants.
By “right test participants,” I mean actual users of your product or representative users who mimic the demographic and behavioral characteristics of your users. Using a screener is the best and easiest way to ensure that you are conducting research with the right users.
A screener is a set of questions, typically delivered verbally or via written questionnaire, used to establish that a prospective participant is a good fit for a given research project.
Benefits of a Research Screener
Utilizing a research screener every time you conduct user research will yield several valuable benefits.
Screeners Ensure Relevance
You want to be designing for the right audience. Whether you are designing an airplane cockpit or a user dashboard for a mortgage-lending app, the content and functionality should be customized to suit your end users’ unique needs and preferences.
For that reason, it is almost never appropriate to conduct testing with “just anyone,” as the results of that research will not be relevant to the product in question. Screeners ensure that users involved in research are relevant to your product and research questions, which, in turn, ensures that the findings are valid.
Screeners Ensure Data Quality
Some individuals are excellent research participants. They are forthcoming with their thoughts and opinions, skilled at articulating complex reactions, and eager to help you find answers to your research questions. Others are not. A participant who provides only one-word responses, shrugs their way through interviews, and speeds through tasks to reach the finish line as quickly as possible will not provide valuable, high-quality data. A well-designed screener can weed out some of these participants.
Screeners Save Time and Money
Without a screener, a researcher will find out if the participant is right for their study only after the study session has started. A well-designed screener prevents wasted time and resources.
Screeners Reduce Bias
Without an effective screener, research is open to various forms of bias. For example, the audience of people who voluntarily join a research panel (like those accessible through UserTesting.com or User Interviews) tends to include many IT professionals and other individuals who are significantly more web-savvy than the average person. Testing exclusively with these participants can lead to bias and yield incorrect insights.
Best Practices for Creating Effective Screeners
The following best practices ensure a representative participant sample in your user research.
1. Define Inclusion and Exclusion Criteria
The first step in creating any research screener is to define your goals. Who do you want to include in your research? And, just as importantly, who do you not want to include? Think of both inclusion criteria (factors that would make someone a good fit for a study) and exclusion criteria (disqualifying factors).
The more narrowly you define your participant pool, the harder it will be to find the right people, so focus on the attributes that are critical to getting reliable answers to your research questions.
Typically, behavioral and demographic characteristics will be the most important to consider.
- Behavioral characteristics: Consider your target users. What behaviors do they have in common? What do they do? What are their common interests and passions? What are their shopping or media-consumption habits? Make a list of the behavioral traits most relevant to your research questions.
- Demographic characteristics: Demographic characteristics are typically less important than behavioral characteristics but are still worth considering. If there are demographic traits that describe your target audience (e.g., senior citizens), add those to your list as well. Additionally, in studies that test a particular user interface, we often exclude people working in UX, marketing, or IT (as these people could be experts at analyzing user interfaces and, thus, not representative of your target users).
Other factors to keep in mind include:
- Technology ownership and usage: Are you planning on testing a native Android app? A Mac app? A Chrome extension? Recruit participants who regularly use the technology or device in question.
- Past research participation: Using the same research participants in multiple studies might lead to biased results. (There are a few exceptions — for example, if you are following the behaviors of the same cohort over time.) Additionally, avoid using “professional testers” — people who make their living by participating in research studies. These people are frequently found in research panels and tend to behave differently than the general population.
2. Focus on Past Behaviors, Not Predictions
When conducting user research, we often want to find people who would use a product or service that does not currently exist. It can be tempting, therefore, to ask directly, “Would you use product X?”
Avoid this temptation. Users are notoriously unreliable at predicting their future behaviors. They may be inaccurate for a number of reasons, including a genuine misestimation, an attempt to flatter the researcher, or an attempt to lie their way into a paid research opportunity.
Instead, focus on relevant past behaviors. Rather than asking whether they would use a product in the future, ask whether they’ve used a similar product in the past.
3. Avoid Yes/No Questions
Many prospective participants want to be part of the study and may be willing to lie to get there. They may try to guess the purpose of the study and answer your screener so that they seem to match it.
In particular, the intent behind yes/no questions is usually easy to guess.
For example, imagine you are recruiting Amazon shoppers for a study. Consider the following suboptimal screener question:
❌ Do you shop at Amazon?
A prospective participant could reasonably assume that the researchers are recruiting Amazon shoppers and respond Yes, whether or not that answer is honest.
Consider this alternative two-question series instead.
Q1: How many times have you shopped online in the past month?
- 0 times (reject)
- 1-5 times (proceed to Q2)
- 6-10 times (proceed to Q2)
- 11+ times (proceed to Q2)
Q2: Which of the following sites have you used to shop online in the past month? (Select all that apply.)
- Zappos (may select)
- eBay (may select)
- Shein (may select)
- Amazon (must select)
- Temu (may select)
- None of the above (reject)
With such a two-question series, it is much more difficult to game the screener and falsify answers.
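If your screener is delivered through a tool or script, the skip and rejection rules above map directly to a few conditionals. Below is a minimal sketch in Python; the function name, answer labels, and rules are illustrative assumptions rather than the API of any particular screening platform.

```python
# Minimal sketch of the skip and rejection logic behind the two-question
# series above. The function name, answer labels, and rules are hypothetical.

def screen_participant(q1_shopping_frequency: str, q2_sites_used: set[str]) -> str:
    """Return 'reject' or 'qualify' for one respondent's answers."""
    # Q1: respondents who have not shopped online in the past month are
    # rejected immediately, so Q2 would never be shown to them.
    if q1_shopping_frequency == "0 times":
        return "reject"

    # Q2: "None of the above" is an automatic rejection.
    if "None of the above" in q2_sites_used:
        return "reject"

    # Q2: Amazon is the "must select" option; the other sites are distractors
    # that help hide the study's focus.
    if "Amazon" not in q2_sites_used:
        return "reject"

    return "qualify"


# Example usage
print(screen_participant("1-5 times", {"Amazon", "eBay"}))   # qualify
print(screen_participant("6-10 times", {"Zappos", "Temu"}))  # reject (Amazon not selected)
print(screen_participant("0 times", set()))                  # reject (no recent online shopping)
```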
4. Avoid Leading Questions
Leading questions favor a particular response and, thus, produce inaccurate data. In screeners, they can also reveal the purpose of the research; participants may use this information to consciously game the screener (and, if recruited, even modify their behavior during the study to match their mental model of it).
Consider leveraging the funnel technique to avoid inadvertently revealing the study purpose. Start with the broadest, least revealing questions before asking specific questions that may reveal some of the researchers’ intent.
❌ Leading Question
How often do you use our app’s new health-tracking features to improve your fitness?
✅ Funnel Technique
- What apps or tools do you currently use to track your fitness or health?
- Which features in these apps do you find most helpful for tracking your progress?
- Have you used our app’s health-tracking features? If so, how have they worked for you?
5. Consider Including a Few Open-Ended Questions
While you should be cautious about overusing open-ended questions in surveys because they can increase the time and effort required from respondents, they serve a valuable purpose in a screener.
Open-ended questions have two benefits. First, open-ended questions provide rich responses, giving the researcher a better picture of the respondents and enabling them to select good fits.
Second, open-ended questions can be an indicator of the effort that participants will apply to testing. If a participant responds to open-ended questions with nondescript, single-word responses, they might do so in a research setting, too.
6. Pilot Your Screener
As with any survey, always pilot your screener before launching it. Testing your screener with a few individuals will typically catch issues such as typos, confusing or ambiguous wording, and broken skip logic.
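If your screener's branching is expressed in code (for instance, in the hypothetical screen_participant sketch above), part of the pilot can be a quick dry run that exercises each path once. The following is a rough sketch under that assumption; it complements, but does not replace, piloting the wording with real respondents.

```python
# Rough sketch of a skip-logic dry run, assuming the hypothetical
# screen_participant function from the earlier sketch is defined in this module.

def pilot_skip_logic() -> None:
    # Q1 rejection: no online shopping in the past month.
    assert screen_participant("0 times", set()) == "reject"

    # Q2 rejection: "None of the above" was selected.
    assert screen_participant("1-5 times", {"None of the above"}) == "reject"

    # Q2 rejection: the "must select" site (Amazon) was not chosen.
    assert screen_participant("6-10 times", {"eBay", "Shein"}) == "reject"

    # Qualification: recent shopper who selected Amazon.
    assert screen_participant("11+ times", {"Amazon", "Temu"}) == "qualify"

    print("All skip-logic paths behave as expected.")


pilot_skip_logic()
```

Checks like these catch broken branching; only pilot runs with a few real respondents will catch confusing or ambiguous wording.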
Common Mistakes
Overscreening
It can be tempting to search for the perfect participant who matches every single behavioral and demographic characteristic of your target audience.
In most cases, doing so will result in an overly restrictive screener that nobody will be able to pass. Remember, screen only for the criteria likely to impact your research questions.
Use of Jargon or Overly Complex Language
You and your colleagues are not the target audience for your screener. Avoid the use of jargon or other potentially confusing language.
❌ Do you have experience conducting ethnographic research for product ideation?
✅ In the past 12 months, how many times have you observed or interviewed people in their real-life settings to get ideas for new products?
Ignoring Diversity in Demographics
If you test only with able-bodied and neurotypical people who are tech-savvy and literate, you will likely be overlooking issues that affect a significant portion of the population.
Consider screening for and including participants who are disabled or neurodiverse, or who have low literacy or low tech proficiency, to ensure that your product will be usable by your entire target audience.
A Note on Synchronous Screeners
Most screeners are delivered via a written, online survey. These are convenient, as they can easily be linked to from websites, emails, online ads, and social-media posts. Written screeners are often sufficient to filter out participants who don’t meet basic criteria.
However, a second screening step over the phone can be highly beneficial, particularly for questions that might reveal the purpose of the study or when assessing qualities that are hard to capture in writing. Since all questions in a written screener are typically visible upfront, participants may guess the purpose of the study and tailor their responses to fit. This can lead to biased participant selection.
Over-the-phone screeners provide an opportunity to ask potentially revealing questions in a more conversational, indirect manner, reducing the chances that participants can guess the study’s intent and manipulate their answers.
Additionally, over-the-phone screeners offer the advantage of assessing the participant’s communication style, which might affect the success of a research session. For example, if you need participants to clearly articulate their thoughts or provide detailed verbal feedback, a phone screener can ensure that a participant’s communication style is a good match for the study.
Keep phone screeners very short (2–5 minutes), as you will typically not compensate respondents for their time. Spend that time having them elaborate on the responses they provided in the written screener, and use your judgment to assess honesty and fit.