‘Hurry up and wait’ of UX research, Part 2: Research Sessions

As fast as we are told to run when starting a UX research effort, we are also forced to stop in our tracks and wait for something else to happen (before we can start running again). Hurry up and wait. Hurry up and wait. It’s not fun, but it may be necessary.

In my last research ops-related article, I talked about how we can capitalize on the wait time between figuring out the recruiting criteria with key stakeholders and beginning to schedule UX research sessions. Hurrying up and waiting also applies when it’s time to do the other 10% of the iceberg — the actual research.

During research days there are most definitely bouts of hurrying up and waiting. The idle time between waits and hurries begs for you to be productive. However, the length of the wait times is unknown. And therein lies the challenge: do you start a big deep-dive task and assume the risk of being interrupted by something more important (like your research effort!), or do you take on something smaller, so you can complete it before your next interruption?

Research on multitasking and interruptions (Adamczyk & Bailey, 2004, among many others) suggests that we take on relevant, lower-workload tasks during these periods of waiting.

Experiments involving forced interruptions — when an external event disrupts someone and they need to react immediately — have shown that interruptions are more disruptive at higher workloads than at lower workloads. These studies also found that resumption lags — the time between the end of an interruption and the resumption of the primary task — were greater when the interruption occurred within a task than between tasks.

For example, it’s going to take me a lot longer (after being interrupted) to get back to reading a long email from HR about a change in company policy (a high workload task) than it would to resume a task like browsing a group chat for people’s comments (a low workload task). I’d probably need to scroll back to the last paragraph of the email I had read before being interrupted to get back on track. The disruption would be greater.

That doesn’t mean we should refrain from doing anything else during this phase of our research. Researchers also found that relevant tasks shortened these resumption lags. Participants were able to process the interruption and resume the primary task more quickly when the task was relevant.

Make use of the idle time between sessions. Rather than trying to get into a deep-dive task, take on relevant tasks with lower workloads. Some potential examples include:

  • Testing out your audio/video and computer equipment
  • Reviewing research-related paperwork
  • Kickstarting your data collection

Testing out your audio/video and computer equipment

If you are using a shared set of resources within your organization for conducting UXR sessions, then it pays to test out the audio/video during your downtime.

Andrew Vannata, UX Lab Manager & Technologist at Bloomberg, emphasizes the need to test out equipment on research days: “In the AV world, it’s a mandatory practice to run your equipment through its paces on production days. It’s no surprise when ‘Mr. Murphy’ shows up unannounced to kill a battery or to implement a new corporate firewall. Getting out ahead of these technical challenges will ultimately maximize your study’s time and productivity.”

Previous users of the shared equipment may have changed its configuration for their own research efforts. Make sure your configurations are ready for prime time.

Check the web conferencing software audio and screen share capabilities. Are you able to hear the person on the other side of the call? Can they see your screen share? Is the recording software working properly? Run a few quick tests with your colleagues. It’s these tiny details that can help you avoid those “inevitable 15 minutes of productivity loss” that many experience at the beginning of meetings.

Have a backup plan for online outages. If you’re testing a prototype that requires online connectivity, consider using a local version instead. HTML files can be saved and hosted locally to eliminate the need for online access.
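
If you go the local route, Python’s built-in http.server module is one quick way to serve those files. Here is a minimal sketch, assuming the exported prototype files sit in a folder named prototype_export (a hypothetical name):

    # serve_prototype.py -- minimal sketch for hosting a prototype locally.
    # Assumes the exported HTML/CSS/JS lives in ./prototype_export (hypothetical folder name).
    from functools import partial
    import http.server
    import socketserver

    PORT = 8000
    Handler = partial(http.server.SimpleHTTPRequestHandler, directory="prototype_export")

    with socketserver.TCPServer(("", PORT), Handler) as httpd:
        print(f"Serving prototype at http://localhost:{PORT}")
        httpd.serve_forever()  # Ctrl+C to stop

Open that URL in the browser on the session machine and the prototype keeps working even if the office connection drops.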

If you’re using online surveys to collect data, have a paper backup in case connectivity stops working. If your computer stops working, fall back on a paper prototype. Always have a backup plan.

Review research-related paperwork

Research-related paperwork is ripe for getting done during idle time!

Relevant? Yes. Lower workload? Indeed! This includes NDAs (non-disclosure agreements), session-specific documents like counterbalance and task sheets, and any stimuli that are part of the study design.

Everyone loves to read a well-written NDA (just kidding, of course)! During recruiting, participants will either have signed and returned their NDAs prior to the research session, or (more likely) they will have forgotten and will need to complete the form before their session begins. Make sure your NDAs and any additional consent forms are available for signature in the session room (or, if the session is at their office, bring lots of extra copies). File away any signed forms so the next participant doesn’t accidentally sign (or see) a previously completed copy.

Keep your session-specific documents near your seat and out of the participant’s area of the desk. Counterbalance sheets (typically used during usability studies) — documents that prescribe the sequence of tasks for each participant — and task sheets — documents that contain the task descriptions for participants to complete — should be properly organized: preferably stacked in reverse order, face down, and out of the participant’s view. Ensure the task order for the next participant matches the order on your counterbalance sheet. This avoids reshuffling papers during the session, which wastes precious time that could be spent observing the participant’s task performance.
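
If you build your own counterbalance sheets, even a simple rotation can do the job. Here is a minimal sketch in Python; the task names and participant count are hypothetical, and a fully balanced Latin square would control for order effects more thoroughly:

    # Rotate the task list so each task appears in each position once per cycle of participants.
    # Task names and participant count are hypothetical placeholders.
    TASKS = ["Task A", "Task B", "Task C", "Task D"]

    def task_order(participant_number):
        """Return the task sequence for a 1-based participant number."""
        offset = (participant_number - 1) % len(TASKS)
        return TASKS[offset:] + TASKS[:offset]

    for p in range(1, 5):
        print(f"P{p}:", " -> ".join(task_order(p)))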

Confirm participant attendance during idle time. Check that your participant is really going to show up for your scheduled session. While you may have reached out to them the day before to confirm, calendars can change quickly. It can’t hurt to send them a quick email or chat message to ensure that they have not been dragged into something else during their scheduled time slot. Confirming attendance for sessions where there is travel involved is especially important. The last thing you would want to do is fly around the world to visit a client and find out that they will not be able to attend your research session.

Kickstart your data collection

Whether you’re having participants complete post-task surveys or you’re jotting down qualitative information about your participants, start organizing the information during your idle time so you spend less time doing it later.

Ultimately, all of the quantitative data you collect will need to be summarized (e.g., averages, medians, ranges). Start summarizing it in your preferred spreadsheet: tally up your data and enter the relevant formulas between sessions. Survey tools like Google Forms even let you link responses to a spreadsheet, so your summary data stays current throughout the study.
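
For instance, if you prefer a script to a spreadsheet, a running summary between sessions could be as small as the sketch below (the task-time numbers are made up for illustration):

    # Running summary of task completion times (seconds) collected so far.
    # The values are made-up placeholders, not real data.
    from statistics import mean, median

    task_times = {
        "Task A": [95, 110, 87],
        "Task B": [240, 198],
    }

    for task, times in task_times.items():
        print(f"{task}: n={len(times)}, mean={mean(times):.1f}, "
              f"median={median(times)}, range={min(times)}-{max(times)}")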

Start designing your coding scheme for your qualitative data. Whether it’s in a spreadsheet or word processing document, there’s no reason to wait for this inevitable step in the process. I’m not suggesting you start deeply analyzing or synthesizing your data. Take a cursory look at the findings and pull out themes and categories into a separate column for reuse across your incoming datasets.
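
As one way to make those themes reusable, here is a minimal sketch that stores a starter coding scheme and writes it to a CSV you can keep adding to between sessions; the theme codes and example notes are hypothetical:

    # Starter coding scheme: theme codes mapped to the observations that support them.
    # Theme codes and notes are hypothetical placeholders.
    import csv

    coding_scheme = {
        "NAV-CONFUSION": ["P1 could not find the settings menu"],
        "TERMINOLOGY": ["P2 misread 'archive' as 'delete'"],
    }

    with open("coding_scheme.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["theme", "observation"])
        for theme, notes in coding_scheme.items():
            for note in notes:
                writer.writerow([theme, note])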

Conclusion

“The wait” between your hurrying can be productive during UX research sessions. Doing tasks that are light in workload and relevant to your study will keep you focused and productive. Leave the heavy-lifting tasks for after research days. Give yourself a break!

References

Adamczyk, P.D., & Bailey, B.P. (2004). If not now, when? The effect of interruptions at different moments within task execution. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: CHI 2004 (pp. 271–278). New York, NY: ACM Press.

Bailey, B.P. & Iqbal, S.T. (2008). Understanding changes in mental workload during execution of goal-directed tasks and its application for interruption management. ACM Transactions on Human-Computer Interaction, 14, 1–28.

Czerwinski, M., Cutrell, E., & Horvitz, E. (2000). Instant messaging: Effects of relevance and timing. In People and Computers XIV: Proceedings of HCI 2000 (pp. 71–76). London, England: Springer-Verlag.

Iqbal, S.T., Adamczyk, P.D., Zheng, S.X., & Bailey, B.P. (2005). Towards an index of opportunity: Understanding changes in mental workload during task execution. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: CHI 2005 (pp. 311–320). New York, NY: ACM Press.

Iqbal, S.T., & Bailey, B.P. (2005). Investigating the effectiveness of mental workload as a predictor of opportune moments for interruption. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: CHI 2005 (pp. 1489–1492). New York, NY: ACM Press.

Monk, C.A., Boehm-Davis, D.A., & Trafton, J.G. (2004). Recovering from interruptions: Implications for driver distraction research. Human Factors, 46, 650–663.


