The COVID-19 pandemic has heavily affected the work of user researchers, who interact with users in person to test or validate products. Confinement rules strongly advised against social interactions of any sort, and one would think user research would come to a halt.
It did not. Researchers improvised, and managed.
It felt a bit like the wild wild west. But as researchers we had to improvise and manage. (gif via Giphy).
We want to share our experience with remote Focus Groups, our workarounds, and how they did (or did not) work for our purposes.
Six months after the experience, the word “remote” feels more immediate than it did then. Though we may be riding into the sunset now, we believe our conclusions might still help anyone switching their protocols to remote-only.
As user researchers, we provide our services for companies who need to evaluate or validate their product among its target users.
One of our projects started in March 2020 and included several Focus Group sessions to discuss the concept of the product — there was not a prototype yet.
In that same month, the entire world began to stay home in order to avoid social interactions.
How could we run the Focus Groups in these conditions?
Remotely, of course. But would it work?
Son of a gun, how could we run the Focus Groups in these conditions? (gif. via Giphy).
Our modus operandi
Short-term projects mean we cannot afford methodological or procedural mistakes. Every decision must be well grounded in previous experience or research.
1. Have others done this before?
We did a quick literature review on remote focus groups (our search keyword was “online” — now you’ll find a lot more resources) [1, 2], and started to understand that, as with all methods, there are pros and cons (Table 1):
Table 1: Benefits and limitations of online Focus Groups [as systematized by 3] CCG. 2020.
We began noticing there was room to change Focus Groups as we knew them, especially in terms of timing. Some reports covered not only synchronous focus groups, where all participants are online at the same time, but asynchronous focus groups as well.
This last modality is set online, on a text-based discussion board or forum, allowing greater time flexibility. More importantly, it gives participants more time to reflect before replying to any discussion.
Some researchers have used the remote, asynchronous method with hard-to-reach participants [3]. We focused on this experience in particular, learning that:
- there are peak hours for interactions — before working hours, during lunch breaks or during the weekends — as such it might be handy to have a response schedule;
- this sort of Focus Group lasts for approximately one week;
- six participants per group is an adequate number.
2. Which tools are there and which ones should we use?
We analyzed and compared three solutions to support our asynchronous online Focus Groups: FocusGroupIt, Collabito, and Facebook.
The first two are turnkey solutions for remote user research, while Facebook is a free forum-like platform. To avoid any mishaps during our sessions, we created profiles and analyzed what worked well and what did not.
We believe this might save you a lot of time, so we will share the comparison table that led us to our final decision (Table 2):
Table 2: Our research execution keypoints comparison between FocusGroupIt, Collabito and Facebook. CCG. 2020.
We hypothesized that, for what we needed, Facebook would have more advantages than disadvantages:
- most people already have a Facebook account and know how to use it;
- most people have the app on their smartphone, allowing them to answer anywhere, anytime;
- allows the presentation of multimedia content (pictures, videos, forms);
- sends user notifications anytime a new post arrives;
- has good scheduling features;
- has tools such as polls, tags, and the ability to add new options;
- allows interaction through comment sections and @-mentions.
When we told the company we had chosen Facebook to conduct the Focus Group, they were surprised… and not very confident. They were concerned that the venue in which the product was presented might affect participants’ opinion of the product.
This was a legitimate concern. After all, we had never used any of these platforms before, so we reached a compromise: we would run a Focus Group on both of our top contenders, Facebook and Collabito, at the same time.
We decided to go one step further and also test the style in which we communicated about the product. On Facebook, we adopted an informal communication style, with emojis, likes and reactions. On Collabito, as it is a business tool, we experimented with a more formal style in emails and topic discussions.
Afterwards, we could check whether the chosen venue had, in fact, any effect on the research results.
3. Preparation and Recruitment
Our Focus Group preparation was set on the following assumptions:
- it would run asynchronously;
- it would happen over the course of one week;
- it would need a full-time moderator to manage and lead the conversation;
- it would have postings scheduled at predefined times.
Taking these assumptions into account, we prepared our script. We defined a set of questions, each targeting something specific we wanted to learn. These questions were then turned into 20 discussion topics to be posted on predefined dates and times.
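To make the scheduling concrete, here is a minimal sketch of how one might spread 20 topics over a one-week asynchronous Focus Group, posting at the peak hours the literature suggests (before work, lunch break, end of day). The slot times and the `posting_schedule` helper are our own illustrative assumptions, not a feature of any of the platforms discussed.

```python
from datetime import datetime, timedelta
from itertools import cycle

def posting_schedule(n_topics, start, slots=("08:00", "13:00", "18:00")):
    """Spread discussion topics over daily peak-hour slots.

    Slot times are illustrative (before work, lunch, after work);
    adjust them to your participants' actual peak interaction hours.
    """
    schedule = []
    day = 0
    slot_cycle = cycle(slots)
    for i in range(n_topics):
        slot = next(slot_cycle)
        if i and i % len(slots) == 0:
            day += 1  # all of today's slots are used, move to the next day
        hh, mm = map(int, slot.split(":"))
        when = start.replace(hour=hh, minute=mm) + timedelta(days=day)
        schedule.append((f"Topic {i + 1:02d}", when))
    return schedule

# 20 topics at 3 slots/day fill exactly one week (7 days):
week = posting_schedule(20, datetime(2020, 3, 23))
print(week[0])   # ('Topic 01', datetime.datetime(2020, 3, 23, 8, 0))
print(week[-1])  # ('Topic 20', datetime.datetime(2020, 3, 29, 13, 0))
```

A plan like this also doubles as the moderator’s checklist: if a platform’s built-in scheduler fails (as happened to us later), the posts can still be published manually at the planned times.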
The company validated the script and we started recruiting participants.
Recruitment happened as usual, as the target users had few requirements to fulfill.
Considering the situation, we sent the informed consent forms to our participants digitally. Importantly, the consent forms specifically asked them to agree to Facebook’s or Collabito’s terms of service and privacy policy.
For the Facebook Focus Group, we offered participants the option of using custom-made accounts, created by us exclusively for this purpose to protect their privacy (though none requested it).
4. The Remote Focus Groups
The Collabito Focus Group had a total of 8 participants, 50% female and 50% male, between 30 and 39 years old. Most users logged in a few times a day, replying to several topics at a time. Sadly, this caused a snowball effect on the participation rate: users fell behind on topics every day, and each day there were more delayed topics to reply to.
As for the Focus Group on Facebook, we also had 8 participants, 25% female and 75% male, between 18–29 and 40–49 years old. Users either answered immediately after a new post, or answered in bulk at the beginning of the day (7AM–8AM) or at the end of the day.
The next section will tell you the Good, the Bad and the Ugly of each platform (Table 3).
These Focus Groups ended up being quite a hootenanny! But let us tell you more about the Good, the Bad and the Ugly (of each platform). (gif. via Giphy).
Collabito
- The Good
Participants replied in an orderly fashion. On Collabito, participants replied to all questions and sub-questions, didn’t go off-topic, and even provided examples (some with photos) when necessary. For the most part, topics were answered in order and close to the posting date. When this was not the case, we noticed that participants had, in fact, accessed the topic, but decided to reply only later on.
We speculate that they may have taken the time to think about the answer before replying to it. In the case of more sensitive subjects, it’s also probable that they decided to wait for someone else to take the first step.
Full Anonymity. User credentials (name, avatar, among others) are created by the group administrator. This allowed us to make sure that no private information was shared without the participants’ consent.
- The Bad
Lack of interaction. There was a lack of interaction between participants. In the end, the Focus Group felt like a remote textual interview, rather than an actual discussion.
This was certainly our main struggle. We do not know if this was related to how the topics were formulated, the tool* itself or both.
We sent follow-up emails at both topic launch and moderator reply: at first without screenshots of the moderator’s reply, and then with them, which actually ended up improving the response rate.
No App. Sadly, Collabito’s website is not fully responsive, so we recommended that users access it from a desktop device, which may have affected the response rate and contributed to the lack of interaction.
No “please specify” option. Collabito has a page type called survey. It allows you to create a topic with a set of polls (single or multiple choice). However, there is no option to allow the input of an “other” value. As we were doing discovery research, we needed polls with this “other” option. When users picked it, we had to ask by email what that value would be, and got a reply for only half of them.
- The Ugly
No emojis 😢. Although we used a more formal communication style, the participants had a knack for emojis. Collabito, despite having an advanced text editor, has no support for emojis — which did not stop our participants from falling back on old-school ASCII emoticons.
Tool stability. Collabito is a very powerful tool, in particular for synchronous focus groups. Yet, the scheduling did not work properly, and posts had to be published manually by the moderator. The email features seemed buggy, so, not trusting them, we sent the emails ourselves. The text editor had issues when including multiple videos, or when configuring or editing images and videos, among others — but we couldn’t do much about that.
Facebook
- The Good
Participants replied in an orderly fashion. On Facebook, participants also replied to everything, didn’t go off-topic, and provided concrete examples. It seemed that participants “learned” how to answer, either by themselves or by seeing how others did it. By day 2, they had all started numbering their answers for easier understanding.
Easy presentation of items. The scheduling tool worked wonderfully! Furthermore, the @ functionality was key to help us to deepen some topics with a particular participant.
A “please specify” option. Some participants added their own options to polls, making the replies a bit closer to their reality and experience.
Direct link to post. Some participants only answered after receiving a nudge via email. This worked well, especially as you could include a direct link to the missing post.
Communication/Feedback Tools. In general, it was easy to communicate. It was simple for the moderator to provide feedback, even if just with a Like or another reaction.
Surprisingly enough, the style was not as loose as we expected it to be. Yes, some participants used emojis, but we observed that they were very straightforward and rather serious. They never digressed nor joked much, which we were expecting to happen on such a platform.
One explanation might be that the moderator’s screen name was our department’s name. Not being a personal name might have made it look more formal.
- The Bad
Lack of interaction. Once again, there was a lack of interaction between participants. Although there were always replies when the moderator asked for a more specific answer, participants simply entered the group, answered the questions, and left.
Post order. On Facebook, the presentation order is determined by the recency of the last comment, which means it did not respect our original posting order. We believe this might have been confusing for participants, who had to search for the posts left unanswered; our direct emails with direct links were a good workaround.
- The Ugly
Problems with secret group invitations. For some reason, there were some unexpected problems with the invitations. One moderator couldn’t see all the users who had Liked the Page where the secret group was hosted, so they had to be invited by another moderator.
Question Type. Our questions might have been too open-ended for an easy interaction on a smartphone. Still, the problem was the questions per se, not the platform’s responsiveness.
Table 3: The Good, the Bad and the Ugly of both remote Focus Groups. CCG 2020
Facebook seemed to be the tool with the most positive results from this experience. Also, the results didn’t seem to be affected by the platform nor the communication style.
Y’all may be surprised, but there still is another ugly point in this situation: we did not adapt our questions to the new medium. (gif. via Gifer).
There still is another ugly point in this situation. It so happened that after all analysis and preparation, we skipped a very important detail: We did not adapt our questions to the new medium.
We prepared our script as if for an open conversation… and let’s face it, very few people actually write out answers to open-ended questions. We forgot to consider that our participants would be answering on their smartphones. Indeed, the questions with the most answers were the multiple-choice ones.
5. The Analysis
The analysis stage of the Focus Group might have presented the biggest advantage of all.
Repeat after us: No transcribing
Yep, since everything was in written form already, we simply had to export (in the case of Collabito) or copy/paste (in the case of Facebook) and start our content analysis.
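Because the data arrives as text, the first analysis pass can be as simple as tallying who answered which topic — exactly the kind of “who did or did not answer” tracking we found so helpful. Here is a minimal sketch; the `topic`/`participant` record format is a hypothetical stand-in, since each platform’s export (or copy/paste) will look different.

```python
from collections import Counter, defaultdict

def replies_per_topic(rows):
    """Count how many replies each discussion topic received.

    `rows` is a list of dicts with hypothetical keys "topic" and
    "participant"; adapt the keys to your actual export format.
    """
    return dict(Counter(row["topic"] for row in rows))

def participants_per_topic(rows):
    """Map each topic to the set of participants who replied to it,
    making unanswered topics and silent participants easy to spot."""
    seen = defaultdict(set)
    for row in rows:
        seen[row["topic"]].add(row["participant"])
    return dict(seen)

# Toy data standing in for an exported discussion:
rows = [
    {"topic": "T01", "participant": "P1"},
    {"topic": "T01", "participant": "P2"},
    {"topic": "T02", "participant": "P1"},
]
print(replies_per_topic(rows))       # {'T01': 2, 'T02': 1}
print(participants_per_topic(rows))  # {'T01': {'P1', 'P2'}, 'T02': {'P1'}}
```

Comparing `participants_per_topic` against the recruitment list immediately tells you who needs a nudge email, before any deeper content analysis begins.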
6. Lessons Learned
Facebook is a good platform for Focus Groups. Odd as it may sound, it turned out to be surprisingly easy to accommodate our needs on this platform.
Tracking tools. It was very helpful to have tracking tools that marked who did or did not answer to assess the general status of discussion.
Email templates. Having email templates for nudges and other reminders was also a huge help. Preparing templates beforehand, with different communication styles, greatly sped up our follow-ups.
Question copy. We need to adapt the question type to the platform. Unfortunately, we failed to adapt our script to this answering style. We should have had more, shorter questions, with more multiple-choice options.
Personalized contact. All our contacts included the participant’s first name, which created some proximity with our participants. We even received replies explaining why they had stopped answering: lack of time, not identifying with the product, etc.
Y’all our participants… you’re awesome. (gif. via Giphy).
Conclusion – Would we repeat?
After all these lessons learned, we would certainly say… it depends (of course!). Depending on the Focus Group’s purpose: yes, we would repeat the experience.
The Recruitment. As the Focus Group was remote, we had more participants than we would have had face to face, from a larger variety of cities and even countries.
The Results. Most participants replied to everything even if some users went into more detail than others.
The Analysis. As everything was already written down, we quickly jumped to the analysis stage.
The Outcomes. The outcomes of these particular Focus Groups were very positive and enriching, and the company was happy with the results.
[1] Stancanelli, J. (2010). Conducting an online focus group. The Qualitative Report, 15, 761–765. Retrieved April 11, 2014, from http://www.nova.edu/ssss/QR/QR15-3/ofg2.pdf
[2] Williams, S., Clausen, M. G., Robertson, A., Peacock, S., & McPherson, K. (2012). Methodological reflections on the use of asynchronous online focus groups in health research. International Journal of Qualitative Methods, 11, 368–383.
[3] Lijadi, A. A., & van Schalkwyk, G. J. (2015). Online Facebook focus group research of hard-to-reach participants. International Journal of Qualitative Methods, 14(5).
* Collabito has no automatic refresh, push notifications, or last-read-message indicator, which makes it harder for participants to find what they have to reply to.
Article also available at: MEDIUM
Joana Vieira Usability Analyst @CCG of D.I.A PIU
With a Master’s in Experimental Psychology (Human Memory) from the University of Minho, Joana is currently a PhD researcher in Ergonomics at the Faculty of Human Motricity of the University of Lisbon, working on auditory warning signs in operating rooms. She is also a specialist in Working Group 3 — Controls, displays and warning location — of ISO TC22 SC39 — Ergonomics in Road Vehicles.
Her research interests focus on the usability of interfaces in all stages, from design to user validation, using various methodologies such as guessability, Kansei engineering, and classic usability tests.
Marina Machado Developer and UX Researcher @CCG of D.I.A PIU
Marina has a degree and a master’s degree in Computer Engineering from the University of Minho (2015), with a certificate in UX/UI Design fundamentals from the California Institute of the Arts.
Her research interests focus on user-centered development approaches, ensuring that the user’s voice is heard and helping to create useful, usable and enjoyable interfaces.
The Applied Research Domain PIU focuses on the development of human-centered studies, assisting in the creation of new products that enhance people’s adaptation — for better usability and comfort, as well as health/rehabilitation, safety, and entertainment.