John Britti

Participatory Whiteboarding

Problem Space

How can we leverage digital tools to make participatory design easier and more effective?

Solution

A digital whiteboarding toolset that helps users more easily engage with complicated whiteboard interfaces and helps researchers quickly collect data, organize it, and return to it later without losing context.

With the wider adoption of digital whiteboarding tools like Miro and FigJam, design researchers have the opportunity to reimagine the spaces where design takes place. No longer must designers and researchers keep their design work cloistered away from their users; the digital whiteboard can allow everyone to engage together and immerse themselves in data.

Our team worked with professional design practitioners both in industry and academia to imagine a whiteboarding toolset that could help bring about this new paradigm of digital participatory design.

These tools help design researchers better engage with participants during sessions and immerse themselves in the resulting data.

Timeframe

  • August 2021 - May 2022

Team

  • John Britti
  • Sean Perryman
  • Wenrui Zhang

Tools

  • User Interviews
  • Observation
  • Journey Mapping
  • Figma

Tags

  • Participatory Design
  • Whiteboarding
  • Professional Productivity
  • Figma

The Toolset

Here are the final prototypes for our whiteboarding toolset imagined in the context of Miro. You can view an interactive version here.

Tool #1

Evidence Collection Pipeline

Integrating auto-transcription with quick, intuitive interactions, this tool helps researchers capture user data during interviews and keeps it organized so they never lose sight of the original context.
Watch the demo
Tool #2

Layers

Layers are an essential productivity and organization feature in most design apps. Our prototype imagines how they could be integrated into whiteboards and enhanced to facilitate design research.
Watch the demo
Tool #3

Question Prompts

This tool gives researchers questionnaire functionality to better structure their user sessions and helps participants navigate complex whiteboard interfaces.
Watch the demo

Timeline

This project largely followed a standard double diamond design process with a more lean UX cadence during the ideation phase.

Research

Because our problem space was more speculative than reactive—that is, our goal was to imagine new approaches to participatory design, not directly address some specific point of failure—we took a mixed-methods research approach designed both to identify the challenges of participatory design and to collect novel methods we could bring into our ideation.

We gathered a whole heap of insights, so I'll only share a few from each method to show what kind of results we were getting.

📚 Literature Review

To get a grasp on participatory design, we consulted academic papers for both information on its general nature and novel methods that academic practitioners deployed in the field.

Literature Review Insight #1

Designing With

Participatory design is about designing with, so participatory methods need to emphasize the autonomy of the participant in research.
Literature Review Insight #2

Transcription

Transcription is a powerful tool for capturing qualitative data, and letting participants explore it can reveal insights the researchers may have missed.
Literature Review Insight #3

Layered Elaboration

An effective tactic to ease participants into the design process and get the best results is "layered elaboration": taking the participant through the same activity multiple times and having them iteratively expand on their answers.

⚖️ Comparative Analysis

In addition to reviewing academic sources, we also aggregated and analyzed several professional research and design tools to discover what features are currently being offered and what gaps still remain.

Comparative Analysis Insight #1

Expectations

Our comparison matrix gave us an effective baseline for user expectations; later, when we started designing, we relied on these findings as a benchmark.
Comparative Analysis Insight #2

Integration

Services overall covered most of a UX practitioner’s needs, but individually, they often lacked features for a full UXR process, so researchers would likely need to jump across platforms to perform their work.
Comparative Analysis Insight #3

Opportunity for Journey Management

While there were several journey mapping services available, none seemed focused on journey management. We thought the concept of a continually updating user journey might offer more opportunities for user participation.

👁️ Observation N=2

We had the opportunity to shadow professional design researchers at NCR while they performed participatory journey mapping with customers.

While we all had some experience running participatory methods from our classes, this gave us a look at how these methods work in the real world.

Observation Insight #1

Trouble Keeping Up

Even note-takers would occasionally fall behind when creating notes and miss some detail of the user's journey.
Observation Insight #2

Structures in Digital Whiteboards Can Be Difficult to Edit

When making adjustments to the journey map in Mural, researchers encountered friction with the interface and the structures they had created. Extending a phase meant selecting every subsequent phase and moving them all.
Observation Insight #3

Aggregation is Onerous

Once the interviews were over, it was another several-hour process to clean and aggregate the data.
Observation Insight #4

Participants Wouldn't Engage with the Board

Participants largely didn’t interact with the journey map even with prompting from the facilitators. They would respond to questions relating to their journey, but rarely would they reference or critique elements on the whiteboard.

💬 Semi-Structured Interviews N=7

The most fruitful research endeavor we pursued, however, was a series of 7 semi-structured interviews with UX practitioners: 5 with peers at Georgia Tech and 2 with colleagues at NCR.

Each interview lasted roughly an hour, and we asked each participant to bring (or reference, if they no longer had access) a design artifact they co-developed with participants.

Semi-Structured Interviews Insight #1

Confirming Other Research

Though it may seem redundant, it was important that the personal experiences of our interviewees confirmed our findings from other methods, like how participants have a hard time engaging and how numerous the touchpoints are.
Semi-Structured Interviews Insight #2

Taking All the Notes

Getting participants to engage with the design process is often enough of a challenge, so facilitators are often the only people interacting with the whiteboard, preferring to lessen the cognitive load on the participant.
Semi-Structured Interviews Insight #3

Writing in the Margins

When conducting PD sessions with users in whiteboarding spaces, researchers often wanted to take notes about their users on the board itself without users seeing them, so they resorted to writing in the far margins, potentially losing the context of the note.
Semi-Structured Interviews Insight #4

Results are Very Messy

One of our interviewees described participatory design results as looking a little like The Obliteration Room. Essentially, results from these sessions are often cluttered and need extensive cleaning to become usable and presentable.
Semi-Structured Interviews Insight #5

Data Structures on the Fly

Oftentimes, researchers would come into a participatory design session with a particular structure in mind, only to find that this one structure wasn't adequate to describe what their participant wanted. For instance, the phases of a journey map may be too rigid, and a Venn diagram might fit better.
Semi-Structured Interviews Insight #6

Assigning Tasks

To ensure that participants are engaged particularly in multi-user sessions, researchers would assign each person a unique task they could focus on. Clear activities help participants engage with the design process.

🗺️ Participatory Design Journey

While we had individually organized all of our data, we wanted to synthesize as much of it as possible in one place so we could most effectively contrast individual insights and identify where we could best intervene. To do so, we constructed a large journey map, summarized here, to find the most fertile soil for a design intervention.

Given the prevalence of problems throughout the design research journey, we knew we would have to focus on a smaller chunk, or else we'd spread ourselves too thin.

All of these phases had enough evidence to support design work, but we ultimately chose the facilitation phase because it seemed to have the most pressing pain points and the most opportunities for intervention; that said, we wouldn't be strict if our ideas spilled into the preparation or synthesis stages.

🗂️ Design Requirements

With our focus chosen, we generated design requirements to guide our idea generation.

Design Requirement #1

The System Should...

Support constructing data structures that are flexible and produce good insights.
Design Requirement #2

The System Should...

Allow the UX practitioner to record the thoughts of participants mid-session.
Design Requirement #3

The System Should...

Support efficiently translating those thoughts to coherent research artifacts.
Design Requirement #4

The System Should...

Support organizing research artifacts to facilitate synthesis.

Ideation

Ideating in Georgia Tech's Industrial Design lab

At this stage, we focused on generating as many ideas as possible based on user needs and design requirements. First, we timed ourselves to generate ideas on our own. Then we swapped our “idea cards” and built on top of each other’s ideas.

While going through all of the initial ideas, we found that there were some recurring themes, so we grouped the ideas that could fit well together into broader categories, identified key features that would best address user needs, and consolidated them into six design concepts.

✏️ Sketching & Feedback N=3

Two of our sketched ideas

We sketched out our 6 ideas and gave each a brief description. We then recruited two peers from the HCI program and scheduled 45-minute feedback sessions where we asked them to identify positives, negatives, and opportunities for each idea.

Rose, bud, thorn activity for our 6 ideas

📦 Concept Reconfiguration

The results of these sessions showed that no single idea was perfect, so we used the feedback from this method to take apart each idea and repackage certain features as a more limited toolset.

Idea #1

Breakout Boards

Breakout boards allow facilitators to group participants and "send" them to their own workspaces, similar to breakout rooms in Zoom. Participants then see only their specific space, while facilitators have a global view of all spaces.
Idea #2

Evidence Collection

Evidence collection brings raw data into the whiteboard for analysis by integrating transcription features and image uploads directly into the whiteboarding space.
Idea #3

Layers

Layers brings the familiar feature from other design apps to whiteboards, with the addition of privacy. Whiteboard owners can better organize content, mark it up without disturbing it, and keep sensitive information hidden.
Idea #4

Question Prompts

Question prompts enable researchers to create simple, structured activities within a whiteboard that are tied to elements on the board, giving participants an easier handle on a whiteboard's complex interface.

Design

With our new set of ideas in hand, we began designing wireframes for testing. Rather than building out prototypes over a longer period of time, then having isolated testing periods, we adopted a more lean UX approach where we tested each prototype weekly, regardless of where it was in development. This process afforded us two distinct benefits:

  1. We were able to respond to participant feedback immediately and make larger conceptual pivots while the prototypes were still "hot."
  2. We were able to elicit a gradient of feedback for our prototypes, from conceptual feedback when they were low fidelity to usability feedback when they became high fidelity.

🔄 Lean UX Process N=9

Every week for a month we scheduled 2-3 thirty-minute feedback sessions with UX practitioners (7 in industry and 2 MS-HCI students). During these short sessions, we had participants evaluate a subset of our tools by interacting with them in Figma and sharing their thoughts aloud as they did so.

Early prototypes were little more than concept wireframes with accompanying descriptions and barely any interactions.

At this stage we mainly gathered concept feedback, asking participants whether they saw value in the idea, to recall (or imagine) situations where it would have been helpful, and to suggest how it could be further realized.

Results from early on led to larger swings in development. For instance, early feedback suggested that breakout boards, while potentially helpful in certain circumstances, addressed too niche a problem with too slight a solution to warrant continued development.

As the ideas developed, the prototypes became more elaborate with multiple interaction flows and the feedback we sought became more focused on granular details and usability.

You'll notice in the images that we're relying more on a fake parking ticket scenario to give our users some context as they interact with the prototype.

Evaluation

After a month of prototyping with users, we took 2 more weeks to re-review feedback and prepare "final" prototypes that could be evaluated by our peers.

📋 Task-based Usability Evaluation N=5

To evaluate overall usability we conducted task-based assessments for each of our features, using a think-aloud protocol, with five of our peers in the MS-HCI program. We also included in our evaluations an A/B test for our layers feature to assess visual design variations for communicating note privacy and layer position.

Evaluation results for Evidence Collection

We organized the results in FigJam according to feature and flow, and rated notes on a 0-4 severity scale. Severity ratings were assigned by team members based on considerations such as how central an identified problem was to the overall functionality of the system, how many users identified it, and how feasible it was to solve.

Select issues identified with Evidence Collection

Overall impressions were positive, with participants expressing enthusiasm for the ideas, but of course, there were still issues that needed addressing. In the interest of brevity, I'll describe only some of the more severe problems we identified in Evidence Collection. Here you can see that users had trouble identifying icons at a glance, and several participants noted readability issues with our text.

💌 Responding to Feedback

We had some time before our final presentation, so we decided to respond to some of the feedback we received. We couldn't get to everything, but using our results, we triaged the most pernicious issues and a few lower-priority "quick wins."

These changes can be seen in the demo videos and interactive prototype here.

Conclusion

📏 Limitations

My greatest regret, and likely the biggest limitation of this project, is that we never constructed a fully functional prototype that we could test in a more realistic setting. While our prototype robustly captures the UI flow of each concept and adequately demonstrates what our toolset does, it doesn't let us evaluate how it actually changes the process of a participatory design session, or indeed whether it improves anything at all.

We may have received positive feedback about the concepts and even had participants imagining concrete situations where the features would be helpful, but these data aren’t full confirmation.

🚀 Future Directions

I believe there's much more work to be done here. My dream for the future of participatory design is one where the barriers between UX professionals and the users they're trying to help are as low as possible.

Much like a physical room where design and design research occur, these digital tools provide designers with a means of collaborating in a shared space, of bringing end-users into participatory design practices, and of housing various design artifacts in a single, persistent place. As work becomes increasingly dispersed and remote, these tools allow design work to occur across greater distances.

They also improve end-users' access to design spaces by removing physical barriers and making participation more convenient. I see this "design room" metaphor for whiteboarding tools as rich with opportunities for expanding the functionality of these spaces.