
Mixed methods research in 2021

User experience research is essential to developing a product that truly engages, compels and energises people. We all want a website that is easy to navigate and simple to follow, one that helps users finish their tasks, or an app that supports and drives engagement.

We’ve talked a lot about the various types of research tools that help improve these outcomes. 

There is a rising research trend in 2021: mixed methods research.

What is more compelling than quantitative user research tools? Combining them with great qualitative research! Asking the same questions in various ways can provide deeper insights into how our users think and operate, empowering you to develop products that truly speak to your users, answer their queries and even address their frustrations.

It isn’t enough to simply ‘do research’, though. As with anything, you need to approach it with strategy, focus and direction. This will funnel your time, money and energy into the areas that will generate the best results.

Mixed methods UX research is the research trend of 2021

With the likes of Facebook, Amazon, Etsy, eBay, Ford and many more big organizations advertising newly created job openings for mixed methods researchers, it becomes very obvious where the research trend is heading.

Gathering data, diving deeper and generating insights that tell us more about our users than ever before is not only good to have; it’s becoming imperative. And you don't need to be Facebook to reap the benefits. Mixed methods research can be implemented across the board, and can be as narrow as finding out how your homepage is performing through to analysing the entirety of your product design in depth.

And with all of these massive organizations making the move to grow their data collection and research teams, why wouldn’t you?

The value in mixed methods research is profound. Imagine understanding what, where, how and why your customers would want to use your service, and catering directly to them. The more we understand our customers, the deeper the relationship and the more likely we are to keep them engaged.

Diving deep into the reasons our users like (or don’t like) how our products operate can also drive your organization to target and operate at a higher level: gearing your energies towards attracting and keeping the right type of customer, providing the right level of service and aftercare, and potentially reducing overheads by not delivering beyond expected levels.

What is mixed methods research?

Mixed methods research isn’t overly complicated, and doesn’t take years to master. It is simply a term for using a combination of quantitative and qualitative data. This may mean using a research tool such as card sorting alongside interviews with users.

Quantitative research is the tangible numbers and metrics that can be gathered through user research such as card sorting or tree testing.

Qualitative research is research around users’ behaviour and experiences. This can be through usability tests, interviews or surveys.

For instance, you may be asking ‘how should I order the products on my site?’. With card sorting you can get the data insights that show how users would like to see the products sorted. Coupled with interviews, you get the why.

You come to understand the thinking behind the order: why one user likes to see gym shorts stored under ‘shorts’ while another would look under ‘activewear’. A deeper understanding of how and why users decide content should be sorted will help you create a highly intuitive website.

Another great reason for mixed methods research is to back up data insights for stakeholders. With a depth and breadth of qualitative and quantitative research informing decisions, it becomes clearer why changes may need to be made, or product designs need to be challenged.

How to do mixed methods research

Take a look at our article for more examples of the uses of mixed method research. 

Simply put, mixed methods research means coupling quantitative research, such as tree testing, card sorting or first-click testing, with qualitative research such as surveys, interviews or diary studies.

Say, for instance, the product manager has identified an issue with keeping users engaged on the homepage of your website. We would start by asking where users get stuck, and when they leave.

This can be done using a first-click tool, such as Chalkmark, which will map where users head when they land on your homepage and beyond. 

This will give you the initial quantitative data. However, it may only give you some of the picture. Couple it with qualitative data, such as watching (and reporting on) body language, or conducting interviews with users directly after their experience so we can understand why they found the process confusing or misleading.
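To make the pairing concrete, here is a minimal sketch (all task names, numbers and interview themes are invented for illustration) of how per-task first-click success rates might be read alongside coded interview notes:

```python
# Hypothetical mixed-methods readout: quantitative first-click results
# lined up with qualitative interview codes, per task.
from collections import Counter

# Quantitative: clicks on the expected target vs. total participants.
first_click = {
    "find_sale_items": {"correct": 12, "total": 40},
    "track_order": {"correct": 33, "total": 40},
}

# Qualitative: tagged quotes from post-session interviews.
interview_codes = {
    "find_sale_items": ["banner ignored", "expected 'Deals' in nav", "banner ignored"],
    "track_order": ["found it easily"],
}

for task, clicks in first_click.items():
    rate = clicks["correct"] / clicks["total"]
    themes = Counter(interview_codes.get(task, [])).most_common(2)
    print(f"{task}: {rate:.0%} first-click success; top themes: {themes}")
```

The numbers tell you where users struggle (30% success on the sale-items task, in this made-up data); the interview themes suggest why.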

A fuller picture means a better understanding.

The key is to identify your question and home in on it through both methods. Ultimately, we are answering your question from both sides of the coin.

Upcoming research trends to watch

To follow the progression of the mixed methods research trend, keep an eye on these:

1. Integrated Surveys

Rather than thinking of user surveys as a one-time, in-person event, we’re increasingly seeing surveys implemented through social media, on websites and through email. This means that data can be gathered frequently and across the board. This longitudinal data allows organizations to continuously analyse, interpret and improve products without ever really stopping.

Rather than relying on users' memories of events and experiences, data can be gathered in the moment, at the time of purchase or interaction, increasing the reliability and quality of the data collected.

2. A return to social research

Customer research is rooted in the focus group: a collection of participants in one space, voicing their opinions and reaching insights collectively. This used to be an overwhelming task, with days or even weeks needed to analyse unstructured forums and group discussions.

However, with the advent of online research tools, this too can be a way to round out mixed methods research.

3. Co-creation

Co-creation is the ability to use your customers’ input to build better products, and it has long been thought of as a way to increase innovative development. Until recently it has been cumbersome and difficult to wrangle more than a few participants, but a number of resources in development will make co-creation the buzzword of the decade.

4. Owned Panels & Community

Beyond community engagement in the social sphere, there is a massive opportunity to utilise these engaged users in product development. Through a trusted forum, users are far more likely to actively and willingly participate in research, providing insights from the community that will drive stronger product outcomes.

What does this all mean for me?

So, there is a lot to keep in mind when conducting any effective user research. And there are a lot of very compelling reasons to do mixed method research and do it regularly. 

To remain innovative and ahead of the curve, it’s important to stay engaged with your users and their needs. Using quantitative and qualitative research to inform product decisions means you can operate with a fuller picture.

One of the biggest challenges with user research can be the coordination and participant recruitment. That’s where we come in.

We take the pain out of the process and streamline your research. Take a look at our qualitative research tool, Reframer, for an insight into how we can make your mixed methods research easier and help you analyse your data efficiently, in a format that is easy to understand.

User research doesn’t need to take weeks or months. With our participant recruitment, we can supply reliable, quality participants across the board who will give you data you can rely on.

Why not dive deeper into mixed methods research today?


How to create a UX research plan

Summary: A detailed UX research plan helps you keep your overarching research goals in mind as you work through the logistics of a research project.

There’s nothing quite like the feeling of sitting down to interview one of your users, steering the conversation in interesting directions and taking note of valuable comments and insights. But, as every researcher knows, it’s also easy to get carried away. Sometimes the very process of user research can be so engrossing that you forget the reason you’re there in the first place, or unexpected things come up that force you to change course or focus.

This is where a UX research plan comes into play. Taking the time to set up a detailed overview of your high-level research goals, team, budget and timeframe will give your research the best chance of succeeding. It's also a good tool for fostering alignment - it can make sure everyone working on the project is clear on the objectives and timeframes. Over the course of your project, you can refer back to your plan – a single source of truth. After all, as Benjamin Franklin famously said: “By failing to prepare, you are preparing to fail”.

In this article, we’re going to take a look at the best way to put together a research plan.

Your research recipe for success

Any project needs a plan to be successful, and user research is no different. As we pointed out above, a solid plan will help to keep you focused and on track during your research – something that can understandably become quite tricky as you dive further down the research rabbit hole, pursuing interesting conversations during user interviews and running usability tests. Thought of another way, it’s really about accountability. Even if your initial goal is something quite broad like “find out what’s wrong with our website”, it’s important to have a plan that will help you to identify when you’ve actually discovered what’s wrong.

So what does a UX research plan look like? It’s basically a document that outlines the where, why, who, how and what of your research project.
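One way to make that document concrete: the sketch below (all field values are hypothetical) captures the where, why, who, how and what as plain data, so the plan can be versioned and shared with stakeholders.

```python
# Hypothetical research plan captured as plain data. The section names
# mirror the where/why/who/how/what outline described above.
research_plan = {
    "why": ["How do people currently use the wishlist feature on our website?"],
    "who": {"stakeholders": ["product team", "customer support"],
            "participants": "existing customers, recruited via email"},
    "how": {"methods": ["card sort", "user interviews"], "budget_usd": 2000},
    "where_when": {"location": "remote", "start": "2021-06-01", "duration_weeks": 4},
    "what": "prioritized findings report delivered back to stakeholders",
}

def missing_sections(plan):
    """Return any plan section that is still empty before research starts."""
    return [name for name, value in plan.items() if not value]

print(missing_sections(research_plan))  # [] - every section is filled in
```

A quick check like `missing_sections` is just one illustration of why a structured plan helps: gaps become visible before the research starts, not halfway through it.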

It’s time to create your research plan! Here’s everything you need to consider when putting this plan together.

Make a list of your stakeholders

The first thing you need to do is work out who the stakeholders are on your project. These are the people who have a stake in your research and stand to benefit from the results. In those instances where you’ve been directed to carry out a piece of research you’ll likely know who these people are, but sometimes it can be a little tricky. Stakeholders could be C-level executives, your customer support team, sales people or product teams. If you’re working in an agency or you’re freelancing, these could be your clients.

Make a list of everyone you think needs to be consulted and then start setting up catch-up sessions to get their input. Having a list of stakeholders also makes it easy to deliver insights back to these people at the end of your research project, as well as identify any possible avenues for further research. This also helps you identify who to involve in your research (not just report findings back to).

Action: Make a list of all of your stakeholders.

Write your research questions

Before we get into timeframes and budgets you first need to determine your research questions, also known as your research objectives. These are the ‘why’ of your research. Why are you carrying out this research? What do you hope to achieve by doing all of this work? Your objectives should be informed by discussions with your stakeholders, as well as any other previous learnings you can uncover. Think of past customer support discussions and sales conversations with potential customers.

Here are a few examples of basic research questions to get you thinking. These questions should be actionable and specific, like the examples we’ve listed here:

  • “How do people currently use the wishlist feature on our website?”
  • “How do our current customers go about tracking their orders?”
  • “How do people make a decision on which power company to use?”
  • “What actions do our customers take when they’re thinking about buying a new TV?”

A good research question should be actionable in the sense that you can identify a clear way to attempt to answer it, and specific in that you’ll know when you’ve found the answer you’re looking for. It's also important to keep in mind that your research questions are not the questions you ask during your research sessions - they should be broad enough that they allow you to formulate a list of tasks or questions to help understand the problem space.

Action: Create a list of possible research questions, then prioritize them after speaking with stakeholders.

What is your budget?

Your budget will play a role in how you conduct your research, and possibly the amount of data you're able to gather.

Having a large budget will give you flexibility. You’ll be able to attract large numbers of participants, either by running paid recruitment campaigns on social media or using a dedicated participant recruitment service. A larger budget helps you target more people, but also target more specific people through dedicated participant services as well as recruitment agencies.

Note that more money doesn't always equal better access to tools. If you work for a company that is very strict on security, for example, you might not be able to use any tools at all. But a bigger budget does make it easier to choose appropriate methods that allow you to deliver quality insights; it might allow you to travel, say, or do more in-person research, which is otherwise quite expensive.

With a small budget, you’ll have to think carefully about how you’ll reward participants, as well as the number of participants you can test. You may also find that your budget limits the tools you can use for your testing. That said, you shouldn’t let your budget dictate your research. You just have to get creative!

Action: Work out what the budget is for your research project. It’s also good to map out several cheaper alternatives that you can pursue if required.

How long will your project take?

How long do you think your user research project will take? This is a necessary consideration, especially if you’ve got people who are expecting to see the results of your research. For example, your organization’s marketing team may be waiting for some of your exploratory research in order to build customer personas. Or, a product team may be waiting to see the results of your first-click test before developing a new signup page on your website.

It’s true that qualitative research often doesn’t have a clear end in the way that quantitative research does, for example as you identify new things to test and research. In this case, you may want to break up your research into different sub-projects and attach deadlines to each of them.

Action: Figure out how long your research project is likely to take. If you’re mixing qualitative and quantitative research, split your project timeframe into sub-projects to make assigning deadlines easier.

Understanding participant recruitment

Who you recruit for your research follows from your research questions. Who can best give you the answers you need? While you can often find participants by working with your customer support, sales and marketing teams, certain research questions may require you to look further afield.

The methods you use to carry out your research will also play a part in your participants, specifically in terms of the numbers required. For qualitative research methods like interviews and usability tests, you may find you’re able to gather enough useful data after speaking with 5 people. For quantitative methods like card sorts and tree tests, it’s best to have at least 30 participants. You can read more about participant numbers in this Nielsen Norman article.

At this stage of the research plan process, you’ll also want to write some screening questions. These are what you’ll use to identify potential participants by asking about their characteristics and experience.

Action: Define the participants you’ll need to include in your research project, and where you plan to source them. This may require going outside of your existing user base.

Which research methods will you use?

The research methods you use should be informed by your research questions. Some questions are best answered by quantitative research methods like surveys or A/B tests, with others by qualitative methods like contextual inquiries, user interviews and usability tests. You’ll also find that some questions are best answered by multiple methods, in what’s known as mixed methods research.

If you’re not sure which method to use, it helps to carefully consider your research question. Let’s take one of our earlier examples: “How do our current customers go about tracking their orders?”. In this case, because we want to see how users move through our site, we need a method that’s suited to testing navigation pathways – like tree testing.

For the question: “What actions do our customers take when they’re thinking about buying a new TV?”, we’d want to take a different approach. Because this is more of an exploratory question, we’re probably best to carry out a round of user interviews and ask questions about their process for buying a TV.
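The decision logic in the examples above can be sketched as a simple lookup. The question-type labels and method lists below are our own illustration, not a standard taxonomy:

```python
# Illustrative mapping from the kind of research question to suitable
# methods: navigation questions suit quantitative tests like tree testing,
# exploratory questions suit interviews and other qualitative methods.
METHODS_BY_QUESTION_TYPE = {
    "navigation": ["tree testing", "first-click testing"],
    "exploratory": ["user interviews", "contextual inquiry"],
    "comparison": ["A/B testing", "surveys"],
}

def suggest_methods(question_type):
    # Fall back to talking to stakeholders when the question type is unclear.
    return METHODS_BY_QUESTION_TYPE.get(question_type, ["stakeholder interviews"])

print(suggest_methods("navigation"))  # ['tree testing', 'first-click testing']
```

Real projects rarely fit one bucket cleanly, which is exactly where mixed methods research comes in: a question can warrant methods from more than one row of the table.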

Action: Before diving in and setting up a card sort, consider which method is best suited to answer your research question.

Develop your research protocol

A protocol is essentially a script for your user research. For the most part, it’s a list of the tasks and questions you want to cover in your in-person sessions. But, it doesn’t apply to all research types. For example, for a tree test, you might write your tasks, but this isn't really a script or protocol.

Writing your protocol should start with thinking about what these questions will be and getting feedback on them, as well as:

  • The tasks you want your participants to do (usability testing)
  • How much time you’ve set aside for the session
  • A script or description that you can use for every session
  • Your process for recording the interviews, including how you’ll look after participant data.

Action: Write up your protocol. It’s essentially a research plan within a research plan – the document you’d take to every session.

Happy researching!

Related UX plan reading


5 common mistakes we have all made with screening questions

This is a guest post from our friends over at Askable. Check out their blog.

Writing screening questions is an everyday part of life as a UXer or researcher of any kind, really. And at first glance, they seem straightforward enough. Draft up some questions that help to either qualify or disqualify people from taking part in your research, whether that’s a survey, an interview or something in between.

At Askable, we have seen thousands and thousands of screening questions. Some horrible and some amazing - and everything in between.

So here we go – 5 of the most common mistakes made when writing screening questions – oh and how to avoid them.

  1. Using closed-ended questions too often

What’s the quickest way of knowing if someone went on a holiday in the last 6 months… You ask them, right? “Have you been on a holiday in the last 6 months – Yes or No?”. Duh.

But actually, a question worded in this way is signposting the answer you’re looking for, which may lead to false answers! And also, the answer doesn’t give you any extra information about that person’s travel habits, etc.

So, perhaps a better way to ask the question would be: “When was the last time you went on a holiday?”, with multiple-choice answers. This also gives you that added info, like whether it was a month ago or 5 months ago, in this case.
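As a hypothetical sketch, here is what that multiple-choice screener might look like as data, with the qualifying window kept separate from the answer options:

```python
# Hypothetical screener: the recency question asked as multiple choice.
# Unlike a yes/no question, it doesn't signpost the answer we want, and it
# records *when* the applicant last travelled.
QUESTION = "When was the last time you went on a holiday?"
OPTIONS = {
    "a": "Within the last month",
    "b": "1-6 months ago",
    "c": "6-12 months ago",
    "d": "More than a year ago",
}
QUALIFYING = {"a", "b"}  # we still only need people who travelled in the last 6 months

def screen(answer_key):
    return {"answer": OPTIONS[answer_key], "qualifies": answer_key in QUALIFYING}

print(screen("b"))  # {'answer': '1-6 months ago', 'qualifies': True}
```

Note that applicants only ever see `QUESTION` and `OPTIONS`; the qualifying rule stays on your side, so the question doesn’t give the game away.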

  2. Using open-ended questions at the wrong time

Open-ended screening questions can be great, but only for certain info. Avoid using them when you have strict criteria; instead, use them to get inside your applicant’s head a bit more. An example would be to ask, as a follow-up to the example above, “Tell me about where you went on your last holiday”.

Open-ended questions are also fine when the answers could vary wildly. A good example is “What is your occupation?”. There are simply far too many possible responses to offer as multiple choice.

  3. Using industry jargon

How many people in the general public know what EV stands for? It’s Electric Vehicle by the way.

Or how about the term ‘Financial Services’? Are we talking about a bank or payments company or an accountant?

Work off the lowest common denominator: assume the applicant doesn’t know anything about your industry. Often they don’t, or they think of it differently to you. When we live and breathe a topic, it’s all too easy to forget that others do not.

  4. Too many screening questions

We often write too many screening questions, for a number of reasons. Sometimes we do it because we forget that screening questions are just that: to screen, not to survey! Don’t start adding questions that are actually part of your research.

Other times it can be because our criteria are just way too narrow. Whatever the reason, a good rule of thumb is to never have more than 15, and the fewer the better.

  5. Not trusting the majority

We have learned this time and time again at Askable – most people are good and honest! We even have a saying now for it – “default to honesty”.

Don’t get overly concerned that your screening questions give too much away. Of course, keep them vague, but don’t go crazy. In our experience, 99% of people won’t take advantage of you. So serve the 99, not the 1.

Wrap Up

Think about these next time you are writing up some screening questions, setting up your research or trying to figure out who it is you really want to talk with. Do this and you will be on your way to some seriously awesome and accurate insights!


5 reasons to consider unmoderated user testing

In-person user testing is an important part of any research project, helping you to see first-hand how your users interact with your prototypes and products – but what are you supposed to do when it’s not a viable option?

The recent outbreak of coronavirus is forcing user researchers and designers to rethink traditional testing approaches, especially in-person testing. So what’s the solution? Enter unmoderated user testing. This approach circumvents the need to meet your participants face-to-face as it’s done entirely over the internet. As you can probably guess, this also means there are a few considerable benefits.

Here, we'll take a look at 5 reasons to consider this testing approach. But first, let's explore what unmoderated user testing is.

What is unmoderated user testing?

In the most basic sense, unmoderated user testing removes the ‘moderated’ part of the equation. Instead of having a facilitator guide participants through the test, participants complete the testing activity by themselves and in their own time. For the most part, everything else stays the same.

The key differences are that:

  • You’re unable to ask follow-up questions
  • You can’t use low-fidelity prototypes
  • You can’t support participants (beyond the initial instructions you send them).

However, there are a number of upsides to unmoderated user testing, which we’ll cover below.

1. You can access participants from all over the globe

There’s a good chance that your organization’s customers don’t exist solely in your city, or even just in your country, so why limit yourself to testing local individuals? Moderated user testing requires you to either bring in people who can visit your office or for you to actually travel to another physical location and host testing sessions there.

With unmoderated user testing, you can use a variety of participant recruitment services to access large groups of participants from all over the globe. Making these services even more useful is the fact many allow you to recruit the exact participants you need. For example, drivers of Toyota hybrid vehicles who live in Calgary.

2. Unmoderated user testing is cheaper

Have a think for a moment about all of the typical costs that go along with a hosted user testing session. There’s the cost of a location if you’re traveling to another city, the remuneration for the people you’re testing and the cost of equipment (that you may not typically have access to). Sure, moderated testing can be made much more affordable if you’re hosting a session in your own office and you have all of the required gear, but that’s not the case for everyone doing research.

Unmoderated user testing really only requires you to choose the tool with which you want to run your user test (variable cost), set up your study and then send out the link to your participants.

3. It’s easier to manage

Unmoderated user testing means you can set aside the difficult task of managing participants in person, from scheduling through to finding notetakers and people to help you with the recording equipment. As we noted in the above section about cost, all you have to do is select the tool and then set up and run your study.

4. Automatic analysis

Running in-person, qualitative usability testing sessions can deliver some truly useful insights. There’s really nothing like sitting down in front of a participant and seeing how they interact with the product you’re working on, hearing their frustrations and learning about how they work. But any insights you gain from these sessions you’ll have to derive yourself. There’s no magic button that can generate useful qualitative analysis for you.

With unmoderated user testing, and especially with the right set of tools, you can run your tests and then have analysis generated automatically from your data. Take our IA tool Treejack as just one example. The functionality built into the tool means you can send out text-based versions of your website structure and then see how people make their way through the website to find what they’re looking for. At the end of your test, Treejack will present you with an array of useful, detailed visualizations like this one:

A Treejack pietree.

5. There’s less chance of bias

Ever heard of the observer effect? It’s a theory that basically states that the observation of a phenomenon will inevitably change that phenomenon, commonly due to the instruments used in the measurement. The observer effect and other biases often come into play during moderated research sessions specifically as a result of having a moderator in the room – typically with their own biases. Removing the moderator from the equation means you’ll get more reliable data from your study.

And the best place to get started?

Unmoderated user research requires unmoderated testing tools. With health concerns like coronavirus and influenza leading to reduced travel and in turn making in-person testing more difficult, there’s never been a better time to start using unmoderated testing tools. If you haven’t already, take our full set of 5 tools for a spin for free (no credit card required).


3 ways you can combine OptimalSort and Chalkmark in your design process

As UX professionals we know the value of card sorting when building an IA or making sense of our content and we know that first clicks and first impressions of our designs matter. Tools like OptimalSort and Chalkmark are two of our wonderful design partners in crime, but did you also know that they work really well with each other? They have a lot in common and they also complement each other through their different strengths and abilities. Here are 3 ways that you can make the most of this wonderful team up in your design process.

1. Test the viability of your concepts and find out which one your users prefer most

Imagine you’re at a point in your design process where you’ve done some research and you’ve fed all those juicy insights into your design process and have come up with a bunch of initial visual design concepts that you’d love to test.

You might approach this by following this 3 step process:

  1. Test the viability of your concepts in Chalkmark before investing in interaction design work
  2. Iterate your design based on your findings in Step 1
  3. Finish by running a preference test with a closed image-based card sort in OptimalSort to find out which of your concepts is most preferred by your users

There are two ways you could run this approach: remotely or in person. The remote option is great for when you’re short on time and budget, or when your users are all over the world or otherwise challenging to reach quickly and cheaply. If you’re running it remotely, you would start by popping images of your concepts, in whatever state of fidelity they’re at, into Chalkmark and coming up with some scenario-based tasks for your participants to complete against those flat designs.

Chalkmark is super nifty in the way it gets people to just click on an image to indicate where they would start when completing a task. That image can be a rough sketch, a screenshot of a high-fidelity prototype or a live product; it could be anything! Chalkmark studies are quick and painless for participants and great for designers, because the results will show if your design is setting your users up for success from the word go. Just choose the most common tasks a user would need to complete on your website or app and send it out.

Next, you would review your Chalkmark results and make any changes or iterations to your designs based on your findings. Choose a maximum of 3 designs to move forward with for the last part of this study. The point of this is to narrow your options down and figure out through research, which design concept you should focus on. Create images of your chosen 3 designs and build a closed card sort in OptimalSort with image based cards by selecting the checkbox for ‘Add card images’ in the tool (see below).


Turn your cards into image based cards in OptimalSort by selecting the ‘Add card images’ checkbox on the right hand side of the screen.


The reason you want a closed card sort is that it’s how your participants will indicate their preference for or against each concept. When creating the study in OptimalSort, name your categories something along the lines of ‘Most preferred’, ‘Least preferred’ and ‘Neutral’. What you call them is totally up to you; if you’re able to, I’d encourage you to have some fun with it and make your study as engaging as possible for your participants!

Naming your card categories for preference testing with an image based closed card sort study in OptimalSort

Limit the number of cards that can be sorted into each category to 1, and uncheck the box labelled ‘Randomize category order’ so that you know exactly how the categories appear to participants. It’s best if the negative option doesn’t appear first, because we’re mostly trying to figure out what people do prefer, and the only way to ensure that is to switch the randomization off. You could put the neutral option at the end or in the middle to balance it out; it’s totally up to you.
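These settings are easy to lose track of, so here is a plain-data checklist of the configuration described above. This is just an illustrative sketch, not the OptimalSort API:

```python
# Study settings from the walkthrough above, captured as plain data so
# they can be sanity-checked before building the real study in the tool.
preference_test = {
    "sort_type": "closed",
    "card_images": True,
    "categories": ["Most preferred", "Neutral", "Least preferred"],
    "max_cards_per_category": 1,
    "randomize_category_order": False,  # keep 'Least preferred' from showing first
}

def check_settings(settings):
    """Assert the choices recommended in the walkthrough."""
    assert settings["sort_type"] == "closed"
    assert settings["max_cards_per_category"] == 1
    assert settings["randomize_category_order"] is False
    assert settings["categories"][0] != "Least preferred"

check_settings(preference_test)  # raises AssertionError if a setting drifts
```

Keeping a checklist like this alongside your study notes makes it easy to rebuild the same setup for a follow-up round.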

It’s also really important that you include a post-study questionnaire to dig into why participants made the choices they did. It’s one thing to know what people do and don’t prefer, but it’s just as important to capture the reasoning behind their thinking. It could be something as simple as “Why did you choose that particular option as your most preferred?”, and given how important this context is, I would set that question to ‘required’. You may still end up with not-so-helpful responses like ‘Because I like the colors’, but it’s still better than nothing, especially if your users are on the other side of the world or you’re being squeezed by some other constraint.

Remember that studies like these contribute to the large amount of research that goes on throughout a project; they are not the only piece of research you’ll be running. You’re not pinning all your design’s hopes and dreams on this one study! You’re just trying to quickly find out what people prefer at this point in time, and as your process continues, your design will evolve and grow.

You might also ask the same context-gathering question for the least preferred option, and consider including an optional question that lets participants share any other thoughts on the activity they just completed. You never know what you might uncover!

If you were running this in person, you could use it as the basis for a moderated codesign session. You would start by running the Chalkmark study to gauge first impressions and find out where those first clicks are landing, while having a conversation about what your participants are thinking and feeling as they complete the tasks with your concepts. Next, you could work with your participants to iterate and refine the concepts together, either digitally or simply drawn out on paper. It doesn't have to be perfect! Lastly, you could finish the session by running the closed card sort preference test as a moderated study using barcodes printed from OptimalSort (found under the ‘Cards’ tab during the build process), giving you the best of both worlds: conversations with your participants plus analysis made easy. The moderated approach also lets you dig deeper into the reasoning behind their preferences.

2. Test your IA through two different lenses: non-visual and visual

Your information architecture (IA) is the skeleton of your website or app, and it can be really valuable to evaluate it from two different angles: non-visual and visual. The non-visual elements of an IA are language, content, categories and labelling. These provide a clear and clean starting point: there are no visual distractions, and getting that content right is rightly a high priority. The visual elements come along later, build on that picture, provide context and bring your design to life. It's a good idea to test your IA through both lenses throughout your design process to ensure that nothing gets lost or muddied as your design evolves and grows.

Let’s say you’ve already run an open card sort to find out how your users expect your content to be organised and you’ve created your draft IA. You may have also tested and iterated that IA in reverse through a tree test in Treejack and are now starting to sketch up some concepts for the beginnings of the interaction design stages of your work.

At this point in the process, you might run a closed card sort with OptimalSort on your growing IA to ensure that those top level category labels are aligning to user expectations while also running a Chalkmark study on your early visual designs to see how the results from both approaches compare.

When building your closed card sort study, you would set your predetermined categories to match your IA’s top level labels and would then have your participants sort the content that lies beneath into those groups. For your Chalkmark study, think about the most common tasks your users will need to complete using your website or app when it eventually gets released out into the world and base your testing tasks around those. Keep it simple and don’t stress if you think this may change in the future — just go with what you know today.

Once you’ve completed your studies, have a look at your results and ask yourself questions like: Are both your non-visual and visual IA lenses telling the same story? Is the extra context of visual elements supporting your IA, or is it distracting and/or unhelpful? Are people sorting your content into the same places they go looking for it during first-click testing? Are they on the same page as you when it’s just words on a page, but getting lost in the visual design and not correctly identifying their first click? Has your Chalkmark study unearthed any issues with your IA? Have a look at the Results matrix and the Popular placements matrix in OptimalSort and see how they stack up against your clickmaps in Chalkmark.

Clickmaps in Chalkmark and closed card sorting results in OptimalSort — are these two saying the same thing?
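If you want to quantify how well the two studies agree, a rough sketch like the one below can help. It assumes you’ve reduced each study’s export to (participant, item, category) tuples, where the category is the sort group in one study and the clicked destination in the other; these shapes are simplifications for illustration, not real export formats:

```python
# Hedged sketch: compare where participants *sorted* an item in a closed
# card sort with where they *clicked* for it in a first-click test.
# Inputs are assumed to be lists of (participant, item, category) tuples,
# a simplification of what the tools actually export.
from collections import Counter

def popular_placement(records):
    """Most common category chosen for each item across participants."""
    votes = {}
    for _, item, category in records:
        votes.setdefault(item, Counter())[category] += 1
    return {item: counts.most_common(1)[0][0] for item, counts in votes.items()}

def agreement(sort_records, click_records):
    """Fraction of shared items whose most popular sort category matches
    the most popular first-click destination."""
    sorted_to = popular_placement(sort_records)
    clicked_on = popular_placement(click_records)
    shared = sorted_to.keys() & clicked_on.keys()
    if not shared:
        return 0.0
    matches = sum(1 for item in shared if sorted_to[item] == clicked_on[item])
    return matches / len(shared)
```

A low agreement score doesn’t tell you *which* lens is wrong; it just flags items worth a closer look in both sets of results.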

3. Find out if your labels and their matching icons make sense to users

A great way to find out if your top level labels and their matching icons are communicating coherently and consistently is to test them using both OptimalSort and Chalkmark. Icons aren’t helpful if they don’t make sense to your users, especially in cases where label names drop off and your website or app relies solely on the image to communicate what content lives beneath it, e.g., sticky menus, mobile sites and more.

This approach could be useful when you’ve already defined your IA and are moving into bringing it to life through interaction design. You might start by running a closed card sort in OptimalSort as a final check that the top level labels you intend to turn into icons make sense to users. When building the study, do exactly what we talked about earlier in the non-visual vs visual lens study: set your predetermined categories to match your level 1 labels and ask your participants to sort the content that lies beneath into those groups. It’s the next part that’s different for this approach.

Once you’ve reviewed your findings and are confident your labels are resonating with people, you can develop their accompanying icons for concept testing. You might pop these icons into some wireframes or a prototype of your current design to provide context for your participants, or you might test the icons on their own as they would appear in your future design (e.g., in a row, as a block or something else) but without any of the other page elements. It depends entirely on what stage you’re at in your project and the thing you’re actually designing. There might be cases where you want to zero in on just the icons and maybe the website header, e.g., a sticky menu that sits above a long scrolling, dynamic social feed. In an example taken from a study we recently ran on Airbnb and TripAdvisor’s mobile apps, you might use the screen below on the left without the icon labels, or the screen on the right that shows the smaller sticky menu version that appears on scroll.


Screenshots taken from TripAdvisor’s mobile app in 2019 showing the different ways icons present.


The main thing here is to test the icons without their accompanying text labels to see if they align with user expectations. Choose the visual presentation approach that you think is best but lose the labels!

When crafting your Chalkmark tasks, it’s also a good idea to avoid using the label language in the task itself. Even though the labels aren’t appearing in the study, using that language still has the potential to lead your participants. Treat it the same way you would a Treejack task: explain what participants have to do without giving the game away, e.g., instead of the word ‘flights’, try ‘airfares’ or ‘plane tickets’.

Choose one scenario-based task question for each level 1 label that has an icon, and consider including post-study questions to gather further context from your participants. For example: did they have any comments about the activity they completed? Was anything confusing or unclear, and if so, what and why?

Once you’ve completed your Chalkmark study and have analysed the results, have a look at how well your icons tested. Did your participants get it right? If not, where did they go instead? Are any of your icons really similar to each other and is it possible this similarity may have led people down the wrong path?

Alternatively, if you’ve already done extensive work on your IA and are feeling confident in it, you might instead test your icons by running an image card sort in OptimalSort. You could use an open card sort and limit the cards per category to just one, effectively asking participants to name each card rather than a group of cards. An open card sort lets you learn more about the language participants use while uncovering what they associate with each icon, without leading them. You’d need to tweak the default instructions slightly to make this work, but it’s super easy to do! You might try something like:

Part 1:

Step 1

  • Take a quick look at the images to the left.
  • We'd like you to tell us what you associate with each image.
  • There is no right or wrong answer.

Step 2

  • Drag an image from the left into this area to give it a name.

Part 2:

Step 3

  • Click the title to give the image a name that you feel best describes what you associate that image with.

Step 4

  • Repeat step 3 for all the images by dropping them in unused spaces.
  • When you're done, click "Finished" at the top right. Have fun!

Test out your new instructions in preview mode on a colleague from outside your design team, just to be sure they make sense!
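When the responses come back, you’ll want to group the free-text names participants gave each icon. Here’s a hedged sketch, assuming you’ve collected the raw labels into a dict mapping each icon to a list of names (an illustrative shape, not a real export format), with light normalisation so trivially different spellings are counted together:

```python
# Hedged sketch: summarise the free-text names participants give each icon
# in an open image sort. Light normalisation (case, whitespace, punctuation)
# keeps "Flights", "flights " and "Flights!" from being counted separately.
# The input shape (icon -> list of raw labels) is an assumption.
import string
from collections import Counter

def normalise(label):
    """Lowercase a label and strip surrounding whitespace and punctuation."""
    return label.strip().lower().strip(string.punctuation + " ")

def name_summary(raw_labels):
    """Map each icon to a Counter of normalised names."""
    return {icon: Counter(normalise(label) for label in labels)
            for icon, labels in raw_labels.items()}
```

If one name dominates an icon’s Counter, that icon is probably communicating clearly; a flat spread of different names suggests the image isn’t saying what you hoped.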

So there are three ideas for ways you might use OptimalSort and Chalkmark together in your design process. Optimal Workshop’s suite of tools is flexible, scalable and works really well together. The possibilities are huge!

Further reading


The ultimate IA reading list

Within the UX industry, there are myriad terms and concepts you’ll need to understand in order to get your job done. One of the most common you’ll come across is information architecture (IA).

What is it? How do you find it? How do you research it? And how do you create it?

We’ve compiled an extensive directory where you can find authoritative content from information architects all over the world.

You’ll find this resource useful if:

  • You’re new to UX
  • You’re a writer, intranet manager, designer, marketer, product owner or content strategist
  • You want to further your knowledge of information architecture

How to get the most out of this guide:

  • Bookmark it and use it as a learning resource
  • Share it with colleagues, students, teachers, friends
  • Read it and share some of the basics to create an awareness of IA and UX in your workplace
  • Check the health of your current IA with this handy guide.

Read on to learn all the ins and outs of IA including topics for beginners, those with an intermediate skill level, and some bonus stuff for you experts out there.

Information architecture is the system and structure you use to organize and label content on your website, app or product. It’s the foundation on top of which you provide the design.

  • "How to make sense of any mess" - This book by Abby Covert is one of the quintessential introductory resources for learning about information architecture. It includes a great lexicon so you can understand all the jargon used in the IA world, and shows you how to make sense of messes that are made of information.
  • "Intertwingled" - A book written by Peter Morville that discusses the meaning of information architecture and the systems behind it.

Ways of understanding information (and how to design for them)

Information seeking behaviors

  • "Four modes of seeking information and how to design for them" - How do your users approach information tasks? Everyone can be different in their information seeking habits and patterns, so it makes sense to do your research and take a deep look into this. In this article, Donna Spencer explains the four different modes of seeking information: “re-finding”, “don’t know what you need to know”, “exploratory” and “known-item”.
  • "How to spot and destroy evil attractors in your tree (Part 1)" - People can get lost in your site due to many different things. One that’s easily looked over is evil attractors, which appear in trees and attract clicks when they shouldn’t. This can confuse people looking for certain things on your site. This article by Dave O’Brien explains how to find and get rid of these evil attractors using tree testing.

Defining information architecture

Ontology, common vocabulary

The relationship between information architecture and content

Content inventories and audits

  • "How to conduct a content audit" - Before you begin a redesign project, you must perform a content analysis of your existing website or app to get an idea of the content you already have. This article (and accompanying video) from Donna Spencer explains the basics of a content audit, how to perform one, and why people conduct them. As a bonus, Donna has included a downloadable content inventory spreadsheet that you can use for your own project.
  • "Content analysis heuristics" - Before you get started on an information architecture project, it’s a good idea to first analyze what you already have. To do this, you use content analysis heuristics. In this article by Fred Leise, you can learn how to conduct a qualitative content analysis, and what each of his heuristics entails.

Content modeling

  • "Content types: The glue between content strategy, user experience, and design" - A lecture and slideshow presentation from Hilary Marsh at the IA Summit 2016 that explains the importance of creating a good understanding of “content types” so people can all be on the same page. Hilary discusses content lifecycles, workflows, relationships, and includes a handy checklist so you can easily identify content types.

Content prioritization

  • "Object-oriented UX" - When you’re designing a new page, website or app, many people look to a content-first approach to design. But what if you’re working on something that is mostly made up of instantiated content and objects? This is when it’s useful to add object-oriented UX to your design process.

Ways of organizing information

  • "Classification schemes — and when to use them" - How do you organize content? Should it be in alphabetic order? Sorted by task? Or even grouped by topic? There are many different ways in which content can be grouped or classified together. But which one works best for your users? And which works best for the type of content you’re producing? This article from Donna Spencer discusses some of the different types of classification schemes out there, when to use them, and which projects you can use them for.

Research for information architecture

Every successful design project involves a good dose of user research. You need to be able to understand the behavior, thoughts, and feelings of people.

Here’s an overview of the different types of user research you can conduct for information architecture projects.

Testing IA

  • "Tree testing: A quick way to evaluate your IA" - When do you need to run a tree test on your IA? And how do you do it? This article from Dave O’Brien runs through a project he worked on, the research methods his team faced, and the results they received. He also shares a few lessons learned which will serve as handy tips for your next tree test.
  • "Tree testing 101" - If you’ve never conducted a tree test before, our Tree testing 101 guide will fill you in on all the basics to get you started. This guide tells you when to use tree testing, how to set your objectives, how to build your tree, and how to run a study in our tree testing tool Treejack.
  • "Card sorting 101" - A guide we put together to explain the basics of card sorting and how to use this method for information architecture. Learn about the three main types of card sorting, how to choose the right method for your project, and how to interpret your card sorting results.
  • "How to pick cards for card sorting?" - An article on our blog that explains which types of cards you should include in your study, and how to write your card labels so that your participants can clearly understand them.
  • "Choose between open, closed or hybrid card sorts" - A section from our Knowledge Base that explains what you need to know about running different kinds of card sorts. Learn what’s involved with open, closed or hybrid card sorts and which one best suits the project you’re working on.
  • "Why card sorting loves tree testing" - Another article from our blog that explains the relationship between card sorting and tree testing and how you can use the two research methods together for your projects.

Advanced concepts in information architecture

IA in a world of algorithms

Cognitive science for IA

IA at scale

IA and SEO

  • "Information architecture for SEO" - When you’re organizing content on a website, you really have two audiences: people and search engines. So how do you make sure you’re doing a good job for both? In this “Whiteboard Friday” from Moz, Rand Fishkin talks about the interaction between SEO and IA, and some best practices involved with organizing your content for both audiences.
