
Validate

Test a design hypothesis.

Card sorting

What

A categorization exercise in which participants divide concepts into different groups based on their understanding of those concepts.

Why

To gain insights from users about how to organize content in an intuitive way.

Time required

15–30 minutes per user

How to do it

There are two types of card sorting: open and closed. Most card sorts are performed with one user at a time, but you can also do the exercise with groups of two to three people.

Open card sort

  1. Give users a collection of content represented on cards.
  2. Ask users to separate the cards into whatever categories make sense to them.
  3. Ask users to label those categories.
  4. Ask users to tell you why they grouped the cards and labeled the categories as they did.
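
Patterns emerge when you aggregate groupings across participants. Below is a minimal sketch of one way to do that, assuming each session is recorded as a list of groups of card labels; the cards and the recording format are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Each participant's open sort, recorded as groups of card labels.
# Cards and groupings here are hypothetical placeholders.
sorts = [
    [{"Passport renewal", "Visa status"}, {"Tax forms", "Payment history"}],
    [{"Passport renewal", "Visa status", "Payment history"}, {"Tax forms"}],
]

# Count how often each pair of cards was placed in the same group.
co_occurrence = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Pairs grouped together most often suggest categories users expect.
for (card_a, card_b), count in co_occurrence.most_common():
    print(f"{card_a} + {card_b}: grouped together by {count} participant(s)")
```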

Closed card sort

  1. Give users a collection of content represented on cards.
  2. Ask users to separate the cards into a list of categories you have predefined.
  3. Ask users to tell you why they assigned cards to the categories they did.
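
For closed sorts, the useful summary is how often each card landed in each predefined category. A minimal sketch, again with hypothetical cards and categories:

```python
from collections import Counter

# One (card, chosen category) record per card per participant.
placements = [
    ("Passport renewal", "Travel"), ("Tax forms", "Money"),
    ("Passport renewal", "Travel"), ("Tax forms", "Travel"),
    ("Payment history", "Money"), ("Payment history", "Money"),
]

placement_counts = Counter(placements)

# Cards split across several categories signal labels that don't
# match participants' mental models.
for (card, category), count in sorted(placement_counts.items()):
    print(f"{card} -> {category}: {count} participant(s)")
```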


Considerations for use in government

No PRA implications if done as directly moderated sessions. The PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)(3).


Multivariate testing

What

A test of variations to multiple sections or features of a page to see which combination of variants has the greatest effect. It differs from an A/B test, which varies just one section or feature.

Why

To incorporate different contexts, channels, or user types into addressing a user need. Situating a call to action, content section, or feature set differently can help you build a more effective whole solution from a set of partial solutions.

Time required

2–5 days of effort, 1–4 weeks elapsed through the testing period

How to do it

  1. Identify the call to action, content section, or feature that needs to be improved to increase conversion rates or user engagement.
  2. Develop a list of possible issues that may be hurting conversion rates or engagement. Specify in advance what you are optimizing for (possibly through a design hypothesis).
  3. Design several solutions that aim to address the issues listed. Each solution should attempt to address every issue with a unique combination of variants so the solutions can be compared fairly.
  4. Use a web analytics tool that supports multivariate testing, such as Google Website Optimizer or Visual Website Optimizer, to set up the testing environment. Conduct the test for long enough to produce statistically significant results.
  5. Analyze the testing results to determine which solution produced the best conversion or engagement rates. Review the other solutions as well, to see if there is information worth examining in future studies.
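
Testing tools typically run the significance check for you, but the underlying math is straightforward. The sketch below applies a chi-squared test of independence to conversion counts; it assumes scipy is available, and the four variant combinations and their counts are made-up illustrations.

```python
from scipy.stats import chi2_contingency

# Rows: one variant combination each (e.g., headline A + button 1, ...).
# Columns: [converted, did not convert] during the testing period.
# All counts are hypothetical.
observed = [
    [120, 880],   # combination 1
    [150, 850],   # combination 2
    [135, 865],   # combination 3
    [170, 830],   # combination 4
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one combination converts at a significantly different rate.")
else:
    print("No significant difference yet; keep the test running.")
```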

Considerations for use in government

No PRA implications. No one asks the users questions, so the PRA does not apply. See the methods for Recruiting and Privacy for more tips on taking input from the public.


Success metrics

What

How to measure whether your design is successful in achieving the intended outcomes.

Why

No matter what you’re designing (software, a service, content, etc.), you need to define what your design should enable people to do and how you’ll measure whether it performs well.

Time required

2–3 hours to brainstorm and select metrics.
Length of prototype run to collect data.

How to do it

  1. Define the outcomes you expect when your design performs well. Invite stakeholders to brainstorm. Your outcomes may be based on your design hypotheses or user needs statements.
  2. List ways to measure whether your design is achieving expected outcomes. Consider quantitative and qualitative metrics:
    1. Quantitative metrics are numerical indicators (e.g. time on task, click-through rates).
    2. Qualitative metrics capture subjective feedback and insights (e.g. usability ratings, user satisfaction).
  3. Determine how you will collect data for each metric. This may involve user interviews, usability testing, surveys, and/or analytics. Choose metrics that are effective and measurable.
  4. Plan who will be responsible for data collection and how often metrics will be reviewed.
  5. Establish benchmarks for each metric. Benchmarks show whether the design meets people’s needs or should be further refined.
  6. Test your design, and evaluate success based on your measurement plan.
  7. Analyze the results against your benchmarks. Identify strengths and areas of improvement. Iterate on the design prototype. Refine metrics if necessary.
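
As a concrete illustration of steps 5 through 7, the sketch below compares one quantitative metric (time on task, in seconds) against its benchmark using only the Python standard library. The observations and the 60-second benchmark are hypothetical placeholders for whatever your measurement plan defines.

```python
import statistics

# Hypothetical per-participant times on task, in seconds.
times_on_task = [48, 72, 55, 61, 90, 43, 58, 66]
benchmark_seconds = 60  # hypothetical benchmark from the measurement plan

mean_time = statistics.mean(times_on_task)
stdev_time = statistics.stdev(times_on_task)

print(f"mean = {mean_time:.1f}s, stdev = {stdev_time:.1f}s, "
      f"benchmark = {benchmark_seconds}s")
if mean_time <= benchmark_seconds:
    print("Meets the benchmark; keep monitoring in future rounds.")
else:
    print("Misses the benchmark; identify which steps slow users down.")
```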


Considerations for use in government

Surveys of more than 9 people require PRA approval. Learn more at PRA.Digital.gov. Direct observation and non-standardized conversations (like semi-structured interviews) are not subject to the PRA.

Your prototype may not be the only effort being tested. People may be overwhelmed by the amount of data they’re asked to provide. Finding metrics that can be automatically collected can help relieve this burden.


Usability testing

What

Observing users as they attempt to use a product or service while thinking out loud.

Why

To better understand how intuitive the team’s design is, and how adaptable it is to meeting user needs.

Time required

30 minutes to 1 hour per test

How to do it

  1. Pick what you’ll test. Choose something (such as a sketch, a prototype, or even a competitor’s product) that might help users accomplish their goals.
  2. Plan the test. Align your team on the scenarios the test will focus on, which users should participate (and how you’ll recruit them), and which team members will moderate and observe. Prepare a usability test script.
  3. Recruit users and obtain their informed consent. Provide a way for potential participants to sign up for the test. Pass along to participants an agreement explaining what participation will entail. Clarify any logistical expectations, such as screen sharing, and how you’ll share links or files of whatever it is you’re testing.
  4. Run the tests. Moderators should verbally confirm with the participant that it’s okay to record the test, ask participants to think out loud, and guide the participant through the session. Observers should contribute to a rolling issues log and relay any in-session questions to the moderator, refraining from interrupting the session from the participant’s point of view. Engage your team in a post-interview debrief after each test.
  5. Discuss the results. Schedule a collaborative synthesis meeting to discuss issues you observed, and any questions these tests raise concerning user needs. Conclude the meeting by determining how the team will use what it learned in service of future design decisions.
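
The rolling issues log from step 4 feeds the synthesis meeting in step 5. Here is a minimal sketch of how a team might rank logged issues by how many sessions surfaced them; the log format and issue names are hypothetical, so adapt them to whatever convention your observers agree on.

```python
from collections import defaultdict

# One entry per observed issue per session; fields are hypothetical.
issues_log = [
    {"session": 1, "issue": "missed the submit button"},
    {"session": 2, "issue": "missed the submit button"},
    {"session": 2, "issue": "unclear error message"},
    {"session": 3, "issue": "missed the submit button"},
]

# Collect which sessions hit each issue.
sessions_hit = defaultdict(set)
for entry in issues_log:
    sessions_hit[entry["issue"]].add(entry["session"])

total_sessions = len({entry["session"] for entry in issues_log})

# Issues seen in the most sessions are strong candidates to discuss
# in the synthesis meeting.
for issue, sessions in sorted(sessions_hit.items(), key=lambda kv: -len(kv[1])):
    print(f"{issue}: observed in {len(sessions)} of {total_sessions} session(s)")
```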


Considerations for use in government

No PRA implications. First, any given usability test should involve nine or fewer users. Additionally, the PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)(3). It also specifically excludes tests of knowledge or aptitude, 5 CFR 1320.3(h)(7), which is essentially what a usability test is. See the methods for Recruiting and Privacy for more tips on taking input from the public.


Visual preference testing

What

A method that allows potential users to review and provide feedback on a solution’s visual direction.

Why

To align the established branding guidelines and attributes of a solution with the way end users view the overall brand and emotional feel.

Time required

4–12 hours for style tiles; 30 minutes per participant to get feedback.

How to do it

  1. Create iterations of style tiles or other assets that represent directions a final visual design may follow. If branding guidelines or attributes don’t exist, establish them with stakeholders beforehand.
  2. Interview participants about their reactions.
    • Ask questions as objectively as possible.
    • Align questions with the branding guidelines and attributes your project must incorporate.
    • As far as possible, allow participants to provide their feedback unmoderated or at the end of your research.
  3. Compare the results of your research with the agency’s published branding guidelines and attributes.
  4. Present the results to the complete product team and decide which direction will guide future design efforts.
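
If you collect structured feedback in step 2 (for example, asking participants to rate each style tile against each brand attribute), a small script can summarize it for the comparison in step 3. The tiles, attributes, and 1–5 rating scale below are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# One (tile, brand attribute, 1-5 rating) record per participant response.
# All names and scores are hypothetical placeholders.
ratings = [
    ("tile A", "trustworthy", 4), ("tile A", "modern", 2),
    ("tile B", "trustworthy", 3), ("tile B", "modern", 5),
    ("tile A", "trustworthy", 5), ("tile B", "modern", 4),
]

# Group scores by (tile, attribute) so each pairing can be averaged.
by_tile_attribute = defaultdict(list)
for tile, attribute, score in ratings:
    by_tile_attribute[(tile, attribute)].append(score)

# Higher averages mean participants saw that tile as expressing
# that brand attribute more strongly.
for (tile, attribute), scores in sorted(by_tile_attribute.items()):
    print(f"{tile} / {attribute}: mean {mean(scores):.1f} "
          f"over {len(scores)} rating(s)")
```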

Considerations for use in government

No PRA implications. The PRA explicitly exempts direct observation and non-standardized conversation, 5 CFR 1320.3(h)(3). See the methods for Recruiting and Privacy for more tips on taking input from the public.
