Plan
Planning ensures that everyone’s time is respected throughout the research process, and helps the team adapt its approach in response to the real world.
Writing a research plan
A research plan (sometimes also called a research protocol) describes the design of your research. Typical 18F research plans include:
- Background
- Goals
- Research questions
- Methods
- Team participation
- Timeline
- Participants and recruiting
- Ethics considerations
- Outputs and outcomes
18F maintains a research plan template (18F only, Research plan in Google Doc). Your research plans do not have to follow this template. What’s important is that you create a plan at all. Research planning helps you and your team:
- Openly commit to learning more about the problem(s) at hand
- Agree on which information is most useful for informing future decisions
- Learn about design research itself
- Encourage reflective practice (for example by reviewing how well the plan matched reality)
Background
Describe factors that the research will need to account for, including any shared beliefs or forces motivating the research itself. Review and summarize any relevant secondary sources (like websites, reports, case studies, or presentations), or link to prior research plans or earlier versions of the concepts you’re testing.
Goals
Design research is fundamentally about reducing risk and informing decisions. When writing your goals, use verbs that specify the output like “describe,” “evaluate,” “quantify,” or “identify.” Avoid vague words like “understand” or “explore.” Example goals could include: “describe user goals and pain points,” or “identify and evaluate the hypothesis behind our proposed design.”
Make sure your goals work toward concrete positive change for your audiences. Poorly written goals can reinforce structural problems or address only surface-level issues.
Research can also have subgoals. For example, some agencies choose to work with 18F to learn more about our approach. Explicitly stating these kinds of subgoals helps provide an honest account of the coaching work that the team will undertake alongside the research itself.
Everyone on the team should agree on the research goals. Clarifying research types is a useful starting point for this conversation.
Ethical considerations
Research affords your team powerful opportunities to interact with people and to explore what’s possible. 18F’s UX team has agreed on its own ethical principles for design research, but those principles are only ours: discuss and clarify ethical principles with your team and your partners. Note any ethical dilemmas or concerns. Identify potential hazards in your product or service and ensure that your research will not bring harm to participants.
Next, engage your team in a conversation about bias. Bias is always present in research, but you can help counter it by discussing the types of bias we actively work to mitigate. Power dynamics are always at play when people interact with government. As a researcher in the federal government, be aware that people’s willingness to share may change depending on their level of trust in government, as we discuss further on our blog.
Research questions
Research questions are high-level questions that reflect what you want to learn in order to make better evidence-based decisions. Research questions are different from interview questions. Research questions should be relevant, actionable, and practical. They should also be ethical: consider whether answering your research questions would put participants in a compromising position. For example, studying the degree to which participants adhere to a law or policy enforced by the researcher’s own office or institution could jeopardize participants’ careers and/or pose authority and coercion issues. Take care when asking questions that might unintentionally exclude or harm interviewees. It’s your responsibility to make a participant’s experience in your research session as smooth and painless as possible, particularly when asking about what isn’t working in a product or service or when bringing up potentially sensitive emotional experiences.
- Bad question: How do we get unemployed adults interested in our website? (This question is bad because it isn’t directly focused on users and their goals; it also assumes that a website is the right solution for unemployed adults.)
- Good question: How do unemployed adults navigate their job search in their first six months of unemployment? (This question is good because it seeks to gain a fuller picture of unemployed adults within the context of a specific activity in a specified period of time.)
Consider holding a research alignment workshop to help stakeholders share and discuss what they’re interested in learning. Regardless of how you build alignment, focus on the value of obtaining useful information.
Methods
Choose one or more methods appropriate for meeting your goals and answering your research questions. Multiple methods can help you challenge or verify information collected and create a more complete understanding. 18F’s Methods provide an overview of our preferred research and design methods. Use these as a starting point, not as a list of constraints. See the Research types section of the UX Guide for more on which methods might be the best fit for the type of research you are doing.
Team participation
Good research is collaborative. People who help accomplish the research are more likely to agree with its outputs.
When planning your research, review with your partners the typical activities involved in 18F’s research, and determine which members of your partner agency’s team will help at each stage of the research process (that is, plan; do; analyze, synthesize, and share). Including partners in this process helps meet our team’s principles of designing together and coaching advocates.
Consider whose perspective might be missing from the planning process. Identify opportunities to include people who have direct experience using the product or service. Involve people who interact directly and regularly with end users. Ideally we are designing and building with the people who will be impacted by the outcomes of our research, not just for them.
Run a frames of reference bias identification workshop (18F only) so the team can avoid influencing the evidence they gather based on the things they presume to be true. The team should also collectively review this guide's bias and ethics pages to ensure these considerations are accounted for in the research.
Timeline
Your timeline should provide a useful estimate of how your research process will unfold. Remind everyone that the timeline is just an estimate, and that the actual timeline will depend on a few things outside of your control, like your partners’ ability to participate, your participants’ availability (if applicable), etc.
Plan more time than you think you need, and consider especially:
- If your research is meant to inform a decision, note when the team anticipates that it will make that decision (for example, is your research due before the next quarterly planning meeting?)
- How you plan to involve the team in any level-setting exercises, such as hopes and fears [18F methods], provisional personas [18F blog], etc.
- How you plan to handle any participant-related logistics (such as inviting participation, accessibility, getting informed consent, and scheduling)
- If your research involves workshops and/or fieldwork:
- Who needs to be where and when?
- What do they need to do?
- When must they be done?
- Where do they go from there?
- How you plan to involve the team in analysis, synthesis, and sharing
A safe estimate for research analysis is about twice as long as the research itself.
Here’s a sample timeline for a contextual inquiry (on site) followed by eight 1-1 interviews (remote) with stakeholders:
Research activity | Estimated time to complete
--- | ---
Initial meeting | 1 day
Research design (research planning) | 1 day
Contextual inquiry | 1 day
Session design | 0.5 day
Recruiting and scheduling | 1 week
In-depth interviews (remote) | 1 week
Initial analysis | 4 days
Collaborative analysis | 2 days
Communicating the results | 2 days
Sharing | 1 day
Participants and recruiting
Most of 18F’s design research depends on you directly interacting with people. Who those people are matters. Participants are the people you’ll recruit to take part in your research. For planning purposes, recruiting involves identifying participant groups and defining your recruitment criteria relative to your research question.
At 18F we often design for the diverse U.S. public. It's our responsibility to include and learn from people with a range of perspectives and a diversity of needs. We must ensure our products and services are accessible to everyone, regardless of their abilities. This means we need to consider the barriers various groups might face and include people from those groups so we can ensure access.
Identifying participant groups
Because of the time-limited nature of 18F engagements, participant groups can depend on the type of research you're doing and where you are in the overall design process. For example, if you’re doing stakeholder interviews as part of a Path Analysis project, you’re likely to learn more about who you need to talk to with each interview you do. We recommend asking “Who else should we speak with?” in these discovery interviews. This can help you learn of groups whose needs should be considered. You might focus future rounds of research on learning from people within these groups.
Once you've framed a problem or research hypothesis, it’s important that your participant groups include people who represent the make-up of the public who may experience the problem or need to use the related service. User profiles and personas are a good place to start, if they are based on existing data. Revise them as you learn more about the users of your service.
You may need multiple research plans to account for the variety of ways people may experience or navigate your design. The audience you're designing for may be very broad. It’s not always possible (or preferable) to design a single experience that meets the needs of the entire population. Focus on identifying the goals, behaviors, preferences, obstacles, and past experiences that might shape people’s interactions with the experience you're designing. If you're conducting usability tests, consider how to prototype an experience for someone who uses assistive technology.
Consider especially:
- People who have disabilities or use assistive technologies
- People who have limited digital skills or low literacy
- People who may need help using the service in question
- People who have limited internet access
The Access Board's Section 508 standards require that our designs be accessible to people with disabilities. The best way to make sure our products and services are accessible is to design for these users from the start. Include people with disabilities in your research and usability testing. To learn more about inclusive design, visit Digital.gov's Accessibility for Teams, 18F's Accessibility Guide, or the TTS Accessibility guild (18F only, #g-accessibility).
Defining recruitment criteria
Recruitment criteria specify the people you want to participate in your research and depend on your research questions. How specifically you define your target audiences can vary at different stages of a project. When you’re just getting started with foundational research, your understanding of who you need to recruit might be fairly high-level; you’ll develop a deeper understanding of the perspectives you want to include in future stages of research.
Example criteria might include:
- A particular demographic (for example, people aged 16 to 24)
- A specific audience (for example, small business owners)
- A particular experience (for example, veterans who've recently moved home)
- A difficult situation (for example, people experiencing a substance use disorder)
- Particular ways of accessing your service (for example, people who rely on a screen reader, use speech recognition software, or only access the internet at a library or on a phone)
If you’re doing usability testing, also consider:
- The level of tool knowledge participants need
- The level of domain knowledge participants need
Review your recruitment criteria with your team. Make sure you’re planning to recruit the right people to help answer your research questions.
Participant safety as a recruitment factor
We have a responsibility to further the best interests of the people impacted by our work and to avoid actions that might bring harm to research participants.
- Mitigate the chances of triggering a trauma response by not recruiting people at their most vulnerable.
- For example, if your project is to develop a person-centered addiction recovery website, instead of recruiting someone actively experiencing addiction you might recruit people who are at least 30 days into their recovery journey, or the friends and family of people experiencing addiction or in recovery.
- Protect employee safety during internal research.
- We want stakeholders to be engaged, but having them observe usability tests and interviews with colleagues might cause participants anxiety or diminished honesty in conversations about topics that might have potential repercussions to their jobs or relationships.
Recruitment for demographic diversity
It’s important that the audiences you’re designing products and services for are part of your research. Ensuring diversity helps us design, develop, and deliver technology products and services in ways that:
- Include the many communities/geographies, identities, races, ethnicities, backgrounds, abilities, cultures, and beliefs of the American people, including underserved communities.
- Lead to consistent and systematic fair, just, and impartial treatment of all individuals, including individuals who belong to underserved communities that have been denied such treatment.
- Recognize, appreciate, and use the talents and skills of our employees and end users of all backgrounds.
- Allow all people, including people with disabilities, to fully and independently use them.
There are a few strategies you can use to include research participants who are representative of the users you’re designing for:
- Start recruitment early. Develop your participant pool sooner rather than later and tap into your agency partners’ resources to locate the appropriate participants. It’s often easiest to get access to people who already use a service, but sometimes your research goals involve learning from people who aren’t currently using an existing service. They may be difficult to find. If there’s an existing website, you might be able to add a link to provide feedback with an option to sign up to speak with you.
- Tap into organizations and networks that serve the populations you’re trying to engage. Personal networks can be an okay place to start recruiting from, depending on how representative your networks are. However, it is important to expand past personal networks as soon as possible because asking family, friends, and colleagues to participate in research may contribute to the likelihood of bias impacting the findings.
- Intercept testing in government buildings. Doing in-person research in public buildings that are visited by a wide cross-section of the population, like libraries and post offices, is one way to reach a diverse group of participants. If you reach a point where you realize you’ve excluded a specific set of users who will use your product/service, intercept testing can be a great save. It can also be used intentionally to test with a specific set of diverse users. For example, setting up intercept testing at a library in a low-income neighborhood might increase a team’s chances of ensuring that some of the feedback on a product or service comes directly from low-income users. Here’s a guide (18F only) on how to do intercept research within GSA buildings.
Recruitment with non-native English speakers
There are many scenarios in which you might include non-native English speakers in research activities, including:
- Developing multilingual website content: When your public-facing website is offered in multiple languages, you need to ensure the content in each language is written in plain language, accessible, and culturally appropriate. Set up usability sessions to test the content with people who speak these languages.
- Understanding barriers: When you’re conducting discovery research to understand barriers that people face accessing your service, decide on multiple populations to conduct interviews with, including non-native English speakers.
We recommend recruiting people who speak the language you are focusing on as their native language and still speak that language at home. That way, you’re able to screen out native English speakers who learned the target language in school and so do not share the cultural perspectives of native speakers.
Cultural and historical context matters when deciding how to screen for participants who speak certain languages and not others, especially when considering people who live not just in states but also in tribes and territories. If possible, consult with colleagues from the communities you want to conduct research with on the wording of screening questions.
There are two distinct modes of engaging with language that you might encounter in the research process: interpreting and translating.
- Interpreting: Applies to spoken language, in real time or with a delay. For example, including an interpreter in a research interview with someone who only speaks Portuguese, or collaborating with an interviewer who will conduct interviews for you in the language you are focusing on.
- Translating: Applies to written content. This can include recruitment materials, your interview script, interview notes, and video clips. When translating materials, it’s essential to consider the cultural context of the demographic you’re recruiting. Some groups will need a little more context, some less. For instance, a newcomer to the U.S. may need a little more information about how to register and vote than a person who grew up in the U.S. and had voting woven into the fabric of their life.
Translating recruiting materials
If you plan on conducting interviews with people who primarily speak a language other than English, you should translate your recruiting materials into the targeted language(s) to remove a barrier to participating in your study.
Two federal agencies offer translation and interpretation services in hundreds of languages:
- National Language Service Corps within the Department of Defense
- Office of Language Services within the Department of State
See if your partner agency has an Interagency Agreement (IAA) set up with either of these organizations or is open to starting one for this work.
Resources for recruiting non-native English speakers
The Multilingual Community of Practice is a valuable resource for practitioners across government who want to expand and improve digital content in languages other than English. It includes a mailing list where people can share design research opportunities seeking participants who speak particular languages, along with ideas, challenges, and best practices for managing multilingual content and websites.
Recruitment within underserved communities
The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government contains a list of communities historically underserved by the Federal government and defines underserved communities as: “populations sharing a particular characteristic, as well as geographic communities, that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as exemplified by the list in the preceding definition of ‘equity.’”
Examples of underserved communities:
- Black, Latino, Indigenous and Native American people, Asian Americans and Pacific Islanders, and other people of color
- Members of religious minorities
- LGBTQ+ people
- People with disabilities
- People who live in rural areas
- People otherwise adversely affected by persistent poverty or inequality
Partnering with community organizations
When conducting research with people who have been historically underserved, we must make sure we are not contributing to the power imbalances, extractive practices, and harm that led to these communities becoming underserved and marginalized in the first place.
In this research, it is important to partner with community organizations that work closely to support and advocate for the needs of their communities.
Examples include:
- Worker centers
- Cultural institutions
- Neighborhood organizations
- Advocacy groups
- Legal aid organizations
Learn about the connections your partner agency has with community organizations. See if your partner agency has a community engagement or outreach team with established relationships in these communities that you could collaborate or consult with.
Just as you consider diversity in your recruitment criteria for participants, also consider diversity in the community organizations you’d like to partner with: contact community organizations not only in urban areas, but in rural areas, too.
What if your partner agency does not have established relationships with communities?
If your partner agency does not have established relationships with communities or the organizations that support them, carefully consider whether to move forward with this research.
We encourage you and your partner organization to weigh the tradeoffs of trying to introduce yourselves and the research opportunity. Recruiting is often the most time-consuming part of design research, and without established relationships with communities, you are more likely to encounter delays. It takes a long time to earn trust.
Guiding questions to help you and your partner agency decide whether to conduct research with communities
Here are some questions to consider as you decide whether to conduct research with communities:
- Does your partner agency have any established relationships with anyone who is part of the community or with the organizations (e.g. grantees) that support them?
- Does your partner agency have a budget for participant compensation?
- Is your partner agency able to invest in translation for recruitment materials and interpreters for interviews if the groups you are interested in include non-native English speakers?
- How long is the research engagement? Do you have enough time and budget scoped into the project for recruiting?
- Is your partner agency able to invest in this relationship beyond the initial research engagement?
- Is your partner agency willing to include the community and/or community organization(s) in a co-design process?
Ideally, many of these questions should be answered during the scoping phase of the project, before an agreement is signed. This need is not always recognized during that process, though.
Make it easy for people to participate
Underserved communities often have limited time and resources, so it is important to be considerate when planning how you will conduct research with them. Plan accommodations such as the following, and highlight them in your recruitment materials:
- Shorter sessions, or unmoderated engagements that they can incorporate into their own schedule, like diary studies
- Hosting sessions in a central location that is accessible by public transit, like a library or community organization, or holding them remotely by phone or video conference
- Working with interpreters if people would participate more fully in a language other than English
- Publicizing the dollar amount you’ll compensate each participant and in what form they will receive it. (Take note of any language limitations for your gift card provider; you may need to figure out some workarounds if they don’t offer content in your participants’ preferred language.)
To learn about various strategies for creating recruitment materials and screening potential participants, see the “Corresponding with participants” section.
Compensating research participants
GSA can compensate members of the public for participating in design research. We cannot compensate government employees. We must do research with people who will actually use our services. See the TTS Handbook for specifics on the process we use to compensate research participants.
Why do we offer compensation?
Compensating participants helps us reduce bias in our research. Not compensating research participants can limit our participant pool to people who have the privilege and flexibility to donate their time. We compensate participants for more than just the time they spend speaking with us. There can be additional costs like transportation, time off from work, and child care. We also compensate to show we value participants' lived experiences and expertise. Sometimes we ask participants to imagine or recall a painful personal experience, including previous difficulties that resulted from interactions with government services.
Providing value to participants
In addition to financial compensation, there are other ways to recognize the value of the knowledge and experience participants contribute to your research. Too often, the people who are the focus of a product or service are left out of the design process once their feedback is collected, and they never learn how valuable their contribution was.
- Give credit to the people who would like their participation identified, and quote and attribute with care so that research participants recognize their voices while receiving the level of anonymity they prefer
- Share decision making with participants where possible, such as making time for topics that they would like to share that are not on your research question list
- Provide access to the data they provided you and the outcomes of your research where possible
- Share resources for additional information on the topic or finding out answers to questions they may have
- Share outcomes of the research, and how the research impacted the final product
- Engage and re-engage with participants throughout the design process, sharing research synthesis with opportunities for clarification
Outputs and outcomes
Before you get started, discuss with your team (including your agency partners) the desired outputs and outcomes of the research.
- Outputs are the documents, diagrams, etc. you will make to share the research with a broad audience. Will you produce a report, useful insights, validated design hypotheses, or something else?
- Outcomes are the changes you expect to see through doing the research. Outcomes should tie back to the goals and subgoals listed earlier. How will doing the research impact the product being developed, the people involved, etc.? How will you know?
We follow a lean, iterative process, which allows the team to be more responsive and flexible to redefine outputs based on what the process finds. Avoid over-specifying your outputs, because you don't know what you’ll find until the research is underway. For example, it’s safer to say “We’ll produce a persona” (a type of artifact) than it is to commit to “We’ll provide 10 useful insights,” because it’s difficult to know how many useful insights the research will produce. That said, discussing possible outputs is useful because it can directly affect how you choose to document the research.
Involving partners in research planning
Hold a meeting to bring the team — including your agency partners — together to agree on the research plan. Tailor the agenda to your project’s history and your partner’s design maturity. For example, if your partner doesn’t yet have personas, you might create provisional personas before the planning meeting; if your partner hasn’t ever planned research before, you might draft a plan for them to respond to. Be ready to educate your partners on the methods you chose and why you chose them, provide example outputs from prior research, etc.
Create an agenda and invite anyone who has an interest in the team’s research. Depending on where you are in the design process, you might begin the meeting with level-setting exercises such as:
- Hopes and fears exercise [18F Design methods]
- User groups identification
Next, review and confirm elements listed in the research plan. It’s especially important to confirm:
- The timeline
- What you hope to learn or do (outcomes)
- What you plan to produce (outputs)
- How the team will participate in the research
An example agenda for a research planning meeting might include:
Activity | Time
--- | ---
Introductions | 9:00 am
Hopes and fears | 9:30 am
Knowledge inventory | 10:00 am
Discuss research goals | 10:30 am
Review (or co-create) research plan | 11:00 am
Discuss participants and recruiting | 12:00 pm
Lunch | 12:30 pm
Review (or co-create) session materials (such as interview guides, wireframes, or prototypes) | 1:30 pm
Discuss desired outputs and outcomes | 2:30 pm
Establish roles | 3:00 pm
Documenting research
Set up a roster
A roster is a spreadsheet used to collect participants’ names, titles, and contact information, and to track whether they’ve been contacted, interviewed, thanked, and so on. The roster should note if specific people have opted out of the research.
Create a folder to contain your roster, interview guides, session recordings and notes, etc. This folder should be accessible only to the core team, as it will likely contain personally identifiable information (PII); see Privacy. A good way to share interview notes without jeopardizing PII is to assign each participant a participant number, e.g. "p1," and refer to those numbers in calendar invitations and notes documents. Destroy the roster at the end of the engagement.
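As a rough sketch, a roster might be laid out like the table below. The column names are illustrative, not a required format; adapt them to your engagement.

Participant | Name | Title | Contact info | Contacted | Interviewed | Thanked | Opted out
--- | --- | --- | --- | --- | --- | --- | ---
p1 | (name) | (title) | (email or phone) | Yes | Yes | Yes | No
p2 | (name) | (title) | (email or phone) | Yes | No | No | No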
Documenting the sessions
Session documentation can take many forms. We often conduct research that may cover sensitive topics or information. Consider the following as you decide how you will document your sessions:
- What is the lightest-weight way to document your sessions and still capture the information you need to create your desired outputs, conduct shared analysis, etc.?
- What type of documentation will your participants be most comfortable with (see Privacy)?
- Did you ask your participants for consent for this form of documentation?
Documentation methods
- Verbatim notes - This is the most common type of note-taking by 18F researchers. Write down everything the participant says, to the extent possible, during each session. The goal is to capture as much as possible during the precious time we have with our participants, and to avoid introducing the cognitive biases that come into play when we are selective about what we write. Taking verbatim notes also curbs the natural tendency to analyze what is being said as it is said. If you’re having trouble writing everything down, focus on capturing what the interviewee says; you or the interviewer can always go back and clarify which questions the interviewer asked.
- Interaction notes - Write down the actions people take and the reactions they have. For example, a note such as “scrolled to top of page, re-read instructions, scrolled back down to input field and typed in name” would be sufficient. If you’re conducting usability testing, consider flagging bugs or usability issues. Note: If two notetakers are available for a session, consider having one person take verbatim notes and the other take interaction notes. In this case, it’s best to work in separate documents, as working too close to each other in the same file can be distracting.
- Spreadsheet notes - Most commonly used for content audits, to track insights and the quality of existing content.
- Sticky notes (digital or physical) - Frequently used in workshop and collaborative settings. We have a subscription to a digital tool for remote workshops and collaboration. Physical stickies will need to be documented via photos or transposed into digital tools.
- Photography - Highly recommended for workshops! During workshops with government stakeholders you don’t need consent forms, but you should still ask for permission before taking photos of participants.
- Video recording - Many of our interviews are conducted via video chat. You can record sessions from within the video conferencing apps themselves, or use video recording software to capture other types of recordings, provided you have participant consent. One of our video chat tools includes automated transcription; transcription quality varies with the speaking style of the people being recorded.
- Voice recording - You can also make an audio recording in lieu of video, which can be helpful if you need to review portions of a session. You can record interviews using the voice memo app on your work phone.
- Transcripts - If you would like full transcripts of your recordings, submit a micropurchase (under $10,000) request for the service to the TTS Office of Acquisitions team. Consult with OA about whether an Open Market Justification form is needed.
Regardless of the method you choose, keep in mind the overall reasons why we document research as you proceed:
- Team members who can’t attend the sessions can look at the notes and get a very clear sense of what the user said and did;
- When even attendees’ memories eventually fade, we can refer back to the notes; and
- We create a starting point for analysis and synthesis.
While most of 18F’s research methods are exempt from the Paperwork Reduction Act (PRA) clearance process, be sure to review legal considerations at the start of and throughout the process to avoid any possible issues. Individual methods’ pages may have further PRA considerations.