Hasso-Plattner-Institut
 

18.09.2019

The "Escalation with Frameworks" Exercise

Holger Rhinow, Program Manager HPI Academy

In this article I describe the so-called “Escalation with Frameworks” exercise. I came up with it in 2016 while working with an innovation team from a large European manufacturing company. The team was exploring fairly new territory at the time and had to handle far more user research data than it typically worked with.

I designed the exercise for the beginning of our data-analysis phase. It turned out to be a nice warm-up exercise that took 90 minutes before we started on the actual analysis. Since then, the exercise has been adjusted and applied in several trainings and with other teams.

Why teams need training in analyzing user research results

After my initial rounds with “EwF” in projects, I began applying the exercise in trainings where we compare similarities and differences between agile approaches such as design thinking and Scrum. Here, “EwF” highlights how a design thinking approach in early innovation projects can differ from a more structured, development-focused approach such as Scrum.

While Scrum is also a user-centered and explorative approach, Scrum teams favor different frameworks to discuss and analyze their data, e.g. a product vision board or a product backlog. These frameworks are taught, and repeatedly created and revisited in designated project phases. To some extent, the application of these frameworks gives structure to a Scrum project.

In design thinking, we see different frameworks that tend to be more optional in nature for the overall process. A design thinking phase in innovation projects may or may not comprise certain frameworks. Oftentimes the decision to use a certain framework in design thinking is ad hoc rather than planned. Examples of these non-binding frameworks in design thinking are personas, empathy maps, feedback grids, and points of view.

In Scrum, standardized frameworks seem to work well because the incoming data sets are often similar. A Product Owner (PO) typically arranges the goals of a project through backlog items ranked in a product backlog. Both the PO and the team then confirm what the product experience should be like based on so-called user stories. None of these artefacts predefines how the experience will be created, but they define a formal structure, and the workload becomes visible to everyone in the team.

When I am hired to work with innovation teams, the teams aim to set up an agile development process. However, they first want to apply design thinking to openly explore a topic with no specific solution in mind. In this process they are likely to absorb various forms of unrelated data, e.g. user statements that are not ranked, observed tensions and surprises in user behaviour, contradictory test results, as well as contradictory statements by experts and in forecasts. These data do not have a rigorous formal structure. They can be concrete or abstract. Confronted with this colourful mixture of data, teams need to apply frameworks in a more situational way, because:

 

Standardized frameworks sometimes absorb unrelated data successfully, sometimes they don’t.

 

The explorative nature of design thinking becomes apparent in these moments, when teams are confronted with a mass of unrelated data they have gathered. There is no formula or algorithm that explains how to analyze these data under certain conditions.

Confronted with the complexity of their own data, teams tend to feel the urge to cut corners and jump to conclusions about their research as fast as possible. They usually look for a few data points that stand out and then want to immediately start discussing how to proceed with this information. I first wondered why people act this way: they spent so much time acquiring the information and were then too restless to enjoy using it. Sometimes project teams spend months just planning their user research in addition to their daily tasks. Yet once they have a rich data set to work with, they cannot seem to wait to leave it behind with just a few conclusions drawn. If the analysis takes only minutes longer than initially scheduled, it already feels slow and painful to a lot of teams.

The reason I found for this self-imposed pressure is that the analysis of data feels so explorative and circular in nature that teams immediately anticipate the frustration they could get into. A team can very well analyze data without making any visible progress for quite some time. In most companies, spending time on something without making visible progress is equated with wasting time. Teams are highly aware of this equation. Nevertheless, it is a necessary tradeoff if you want to play the game called innovation.

I therefore always tell my teams that the analysis of data needs to be a playful (if not the most playful) part of their project work, even more so than conducting the user research itself, which is a real craft based on techniques and skills mastered over years of practice. The analysis of unrelated data demands a lot of initial guessing and moving from one data point to another. It is playful in the sense that it should allow for novel and unusual arrangements of data that oftentimes make no sense until they do. In its best moments, teams perceive it as a truly creative act. Most teams are not used to this kind of work in their daily routines, however, which is why I train them for it.

My goal with EwF is to warm up teams for analyzing their user research data in a more playful way. In my workshops I usually start the exercise with an open interview to create a rich pool of raw data that everyone shares. However, the exercise is also applicable to real project data, as long as everyone in the team has access to that data pool.

As a side effect, teams become familiar with several frameworks that are often used in early innovation projects.

A selection of frameworks that I often apply in the analysis of qualitative user data (Creative Commons 4.0 by Holger Rhinow)

The design rationale of frameworks

Frameworks are powerful tools in teamwork sessions. They help us to play with data first instead of jumping to superficial conclusions. Frameworks do not occur in nature; they are tools that push teams into externalizing their thoughts on a given topic.

The frameworks we often use in innovation projects are illustrated above. With the exception of personas, all of them are used in the exercise that I describe in this article.

As I mentioned before in an article about frameworks, the best framework is the one you as a team will design while you are in the thought process. It is the framework that allows you to let your data respond to you in an inspiring way. A meaningful framework illuminates gaps and opportunities in and in-between data points.

This is easier said than done. That is why I came up with this exercise in the first place.

The three levels of mastering frameworks

The exercise reflects my idea of an educational approach, illustrated below. In order to select between frameworks or design new ones, you should first make yourself familiar with existing frameworks.

EwF is based on this educational model. Teams enter the exercise with given frameworks (level 1). They work their way through four frameworks and afterwards reflect on which is most appropriate for their user research data (level 2). If there is time left, they finally design a new framework that adds a completely new perspective on their data set (level 3).

In order to complete the exercise, they go through five steps.

Learning levels in applying frameworks (Creative Commons 4.0 by Holger Rhinow)

Step 1: Nugget Framing — a simple intuitive decision

A team will first write down all statements and quotes that stood out in an interview. We usually use sticky notes and whiteboards so that everybody can see all data at all times. Writing down a statement also means deciding against other statements. To make sure that a team does not leave too much food for thought behind in this set-up, I always encourage every team member to write down their favorite statements in silence. That way, they are not biased by each other’s preferences too early in the process. As a tradeoff, we create a bit of redundancy in the form of duplicates. I am totally comfortable with this tradeoff, as it is a small effort with almost no time wasted thanks to the parallel writing. One could argue that duplicates are not even real redundancy but rather new information in and of itself; if everyone ends up writing down one particular statement independently of each other, then maybe this already says something about the statement (or the team).

 

Once all data are randomly arranged on the whiteboard or a wall, I hand out a framework called the nugget frame; each team member gets one. Their first objective with this framework is to frame the one statement most interesting to them. It is a super-intuitive rating system, ranking a single data point over every other data point.

Team members can intuitively make a choice without having to give an explicit reason for it. However, an interesting cognitive phenomenon happens just a few seconds after all the nugget frames are positioned: members are now likely to justify their choice to their team. Karl Weick describes the act of “sensemaking” with a question that captures what happens here:

 

“How can I know what I think until I see what I say?”

 

Positioning a nugget frame before discussing the choice is the act of saying “I choose this data point A”. The choice triggers a realization that can be shared with the team. Data point “A” can be something highly relevant or problematic. It can be inspiring in a way that already sparks the team member’s individual creative muscle. In some cases it refers to a controversial aspect of the interview (e.g. a contradiction between what the interviewee said and what that person actually does).

After every team member has placed a nugget frame and shared their sensemaking process, the team moves on to a more complex framework called a Jobs-to-be-Done statement or a job story.

A team member is nugget framing a statement that stood out from her research © HPI Academy

Step 2: Jobs-to-be-Done statements to interpret causality

Jobs-to-be-Done statements (JTBD) refer to the idea that human beings behave differently according to the situation they are in and the goals they aim for in that situation. There is a debate as to whether humans actually aim for outcomes or for progress in general. My understanding is that a JTBD statement describes a human being’s behaviour in a specific situation with a specific intent in mind. Humans use a tool to get a job done, not because they want to own the tool.

A famous example is the notion that homeowners do not buy quarter-inch drills because they need drills or feel the urge to own them; they buy them because drills are tools to do the job of drilling a hole in the wall (better than a spoon, for example). Buying a drill is not the result of a psychological preference but part of a causal chain in a homeowner’s life. There is a situation that requires a hole in the wall because the homeowner finally wants to hang up a picture. A screw can hold that picture, and the screw demands a hole. A JTBD statement refers to this causal chain:

 

Causal chain: Drill → Hole → Screw → A Hanging Picture

 

Why is this relevant? The causal chain illustrates that new solutions can enter the homeowner’s life from different angles. An innovation team can design a new drill because a homeowner needs it. However, a team can also follow the chain and reconsider how holes can be put into walls without drilling them; in the end, the homeowner does not need a drill. Or a team reconsiders whether screws are really the perfect tool for hanging pictures on the wall; it may be some kind of strong tape or glue, or a different picture frame, or a different wall.

The Job-to-be-Done perspective allows us to see qualitative data in the context of causality. Someone applies some kind of behaviour in order to achieve something in a given situation. That behaviour can be explained through context, not primarily through psychological preferences.

For the exercise I often apply Alan Klement’s version of a Job-to-be-Done Statement or a Job Story:

 

“When I am (situation), I (action), so that (outcome)”.

 

With this framework in their hands, team members first scan their nugget-framed data points and rewrite them as JTBD statements from a user’s perspective; the “I” in the statement refers to whoever said or did something, not to the team. For example: “When I am alone in my apartment (situation), I watch the latest Netflix specials (action), so that I feel entertained (outcome).”

Not all data points can be integrated into JTBD statements, for several reasons. Sometimes a data point describes neither a situation, nor a behaviour, nor an expected outcome. If a user simply said “I don’t like intransparency” and the team has no picture in mind of what that statement means in the user’s life, an interpretation can only be made by guessing. Usually at this point, teams realize that they missed a lot of valuable information in their interviews because they did not ask precisely enough about the scenes their interviewees were referring to.

Usually two or three JTBD-statements in the exercise are enough to do the job. The active interpretation of qualitative data as Jobs-to-be-Done-statements demands an open discussion by all team members. Jobs to be Done already indicate fields of opportunities for new solutions, yet they still focus on single data points from individual interviews without putting them in a larger context.
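For teams that later move their statements into a digital backlog, the job-story template can be captured as a tiny data structure. This is only an illustrative sketch of the template’s three slots, not part of the exercise itself; the class and field names are my own invention:

```python
from dataclasses import dataclass

@dataclass
class JobStory:
    """One Jobs-to-be-Done statement in Alan Klement's job-story format.
    The three fields mirror the three slots of the template."""
    situation: str   # "When I am (situation), ..."
    action: str      # "... I (action), ..."
    outcome: str     # "... so that (outcome)."

    def render(self) -> str:
        # Fill the template with the three interpreted slots.
        return (f"When I am {self.situation}, "
                f"I {self.action}, "
                f"so that {self.outcome}.")

# The Netflix example from above, rendered through the template:
story = JobStory(
    situation="alone in my apartment",
    action="watch the latest Netflix specials",
    outcome="I feel entertained",
)
print(story.render())
# → When I am alone in my apartment, I watch the latest Netflix specials, so that I feel entertained.
```

Keeping the three slots separate makes it obvious when an interpretation is incomplete: a missing situation, action, or outcome shows up as an empty field.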

An alternative to a Jobs-to-be-Done statement can be a User Story, which also describes the causality of a person’s action in a given situation with a specific goal in mind. User Stories, however, are based on the idea that a certain type of user, or a certain role, has an impact on behaviour:

 

“As a (role), I want to (action), so that (outcome)”

 

I am not dogmatic about this. Both frameworks work well for defining causality in user behaviour. What matters is that team members begin to actively connect the causal chain of a user’s behaviour or action to the context of a situation and an expected outcome. This can change the perception of data points dramatically. Seemingly uninteresting data points can now appear more relevant because they refer to a relevant situation, a relevant tool, or a relevant outcome.

Step 3: A User Journey to describe a chronology

In step 3, teams have to combine various data points into a single framework. A user journey is a framework that describes a chronology of actions and emotions (e.g. a user’s daily work routines or a sequence of actions to plan an important event). At its core, the user journey is a horizontal timeline. Qualitative data are placed on that timeline to illustrate events that happen sequentially. Teams may also be able to add more information, for example by describing the user’s emotional status over time.

User Journeys are a great framework for prioritizing multiple data points. Combining multiple data points is quite demanding, but the given chronological order provides enough orientation to focus on the sequence of events and then evaluate their relevance for the user. At the same time, it is important that the timeline does not mix data from different users who do not have similar journeys.

A team member organises user events in a chronological, horizontal order and evaluates their relevance vertically © HPI Academy
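The two dimensions of a user journey, a horizontal chronology and a vertical emotional evaluation, can be sketched in a few lines of code. All events, their order, and the emotion scores below are invented for illustration:

```python
# Hypothetical journey data: (order on the timeline, event, emotion from -2 to +2)
journey = [
    (3, "waits in line at the checkout", -2),
    (1, "enters the store", 0),
    (2, "finds the product quickly", 2),
    (4, "pays and leaves", 1),
]

# Sort by order (the horizontal timeline) and render the emotion as a small bar.
for order, event, mood in sorted(journey):
    bar = "+" * mood if mood > 0 else "-" * -mood if mood < 0 else "."
    print(f"{order}. {event:32s} {bar}")
```

The sort restores the chronology no matter in which order the sticky notes were collected, and the emotion column makes the low points of the journey, the opportunities, stand out.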

Step 4: 2x2 Matrix — a framework for pros

In step 4, teams again shuffle their data, this time using an empty 2x2 matrix drawn on a whiteboard. Once filled out, a 2x2 matrix positions data points in relation to each other, e.g. through distinctions or power relations: “A” is cheaper than “B”.

But teams start with a blank matrix. This means it is unclear what kind of data should go into the framework, because the axes lack any description. The teams have to solve this puzzle by first randomly arranging data on the board. At this point a lot of teams reach level 2 of our educational model: they select from previous frameworks to apply patterns they have already seen. For example, they may arrange their nugget frames in the matrix, or they define how different JTBD statements can be put into relation to each other.

The challenge in step 4 is that there may be no relationship between randomly arranged data points. When teams realize this, they need to reach a consensus on what the relationship may be. This can entail adding new data to the board and removing old data from it.

Multiple relationships can become apparent, some more inspiring than others. Teams may see structural differences between problems. They could discover new correlations between one factor (e.g. the income level of different users) and another (e.g. the number of hours they spend in coffee shops). 2x2 matrices may illuminate causalities as well.

2x2 matrices can go in any direction. That is what makes them so hard to apply in the first place. Teams need to stay open for ambiguity while playing around with raw data. Sometimes great 2x2 matrices appear that can be applied in other projects, as well. Nevertheless I always encourage teams to define their own 2x2 matrix based on their individual qualitative data.

A 2x2 matrix that illustrates an interviewee’s perception of social media content as either beneficial or risky depending on the public availability of the content © HPI Academy
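The quadrant logic of a 2x2 matrix can also be expressed in a few lines, using the income-versus-coffee-shop-hours example from above. The data points and the axis splits are invented; in a real session the team negotiates both on the whiteboard:

```python
def quadrant(income, hours, income_split=4000, hours_split=5):
    """Place a data point in one of the four quadrants of a 2x2 matrix.
    The axes (income, coffee-shop hours) and the splits are hypothetical."""
    return (
        "high income" if income >= income_split else "low income",
        "many hours" if hours >= hours_split else "few hours",
    )

# Hypothetical users: (name, monthly income, weekly hours in coffee shops)
users = [("A", 2200, 2), ("B", 5400, 9), ("C", 3100, 7), ("D", 6000, 1)]
for name, income, hours in users:
    print(name, quadrant(income, hours))
```

The interesting moments on the whiteboard correspond to changing the split values or swapping an axis entirely: the same data points land in different quadrants, and a new reading of the data appears.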

Step 5: The final escalation

Step 5 is optional; a lot of teams simply run out of time during step 4, which is also fine. If teams are quick in their decision processes, I give them a chance to enter level 3: the design of a new framework that sheds new light on their data. At this point, teams have already played with their data pool enough to know what is in there and what is not.

A new framework starts with nothing but a first data point or a relationship between two data points. New things can be added from there or removed again. In its best moments, the situation escalates into unforeseen arrangements of data.

I have seen creative solutions such as specific Venn diagrams, data maps, data wheels, and new narratives. In the end, a great framework is one that adds a new perspective on raw data: a perspective that was overlooked at first but turns out to be inspiring for the team.

 

Conclusion

Since there is no formula for analyzing qualitative data, we have no choice but to play with the data until something appears that resonates with the team. EwF opens teams up to the endless possibilities in how to look at user research data. It is a fast training exercise and it leads to surprising results; you could say it escalates in a fun way.

PS: Let me know if you need further support in conducting the exercise. Also, feel free to share your own examples.