Researcher Affinity Mapping with post-it notes

Using Affinity Mapping for Grounded Theory Analysis

Based on the paradigm of Grounded Theory, our aim was to develop concepts and theories inductively from the observation data. To make sense of the data and draw out those concepts and theories, it goes through a process of coding. To assist with coding, we employed the method of Affinity Mapping.

Affinity Mapping

Created in the 1960s by Jiro Kawakita, an anthropologist, Affinity Mapping is a process used to organise large amounts of data (concepts, ideas, and issues) into affinities (or groups) based on their relationships to one another. Adopted within design ethnography, it is a useful inductive process for making sense of relatively extensive sets of data.

There are four basic steps to Affinity Mapping:

·      Step 1 - Generate ideas: extract concepts

·      Step 2 - Shuffle and display ideas

·      Step 3 - Sort ideas into groups

·      Step 4 - Create header cards: top-level descriptions of concepts

In reviewing our observation data, we extracted concepts from each heading of our data collection framework and wrote them on post-it notes. These notes were shuffled, and individually we began to group similar notes together. This was carried out over several days for each of the topics. We then talked through the shape of the map, looking for patterns, moving notes between areas, and discussing the themes arising. We used different colours to label the groups, and these group headings formed a codebook for a code analysis of all of our captured data. This process identifies where particular issues are present within the data and gives us confidence that a particular theme is significant within the data set.
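To make the codebook idea concrete, the sketch below represents group headings as codes with keyword indicators and flags which codes a given excerpt touches. This is only an illustration of the data structure: in the study the coding was done by the researchers' judgement, not by keyword matching, and the code names, indicators, and excerpt here are hypothetical.

```python
# Hypothetical codebook: group headings from the affinity map, each with
# a few illustrative keyword indicators (not from the actual study).
codebook = {
    "tool_switching": ["switch", "another tool", "copy across"],
    "trust_in_data": ["not sure", "double-check", "verify"],
}

def apply_codebook(excerpt, codebook):
    """Return the codes whose indicators appear in an excerpt (toy first pass)."""
    text = excerpt.lower()
    return [code for code, indicators in codebook.items()
            if any(keyword in text for keyword in indicators)]

excerpt = "I had to double-check the numbers in another tool"
print(apply_codebook(excerpt, codebook))
# → ['tool_switching', 'trust_in_data']
```

A tagging pass like this yields, for each excerpt, the set of codes present, which is the shape of data the inter-rater comparison below works over.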

Inter-rater reliability scoring

To improve the reliability of using these codes to represent the data, we carried out a series of inter-rater tests. Borrowed from quantitative approaches to data coding, inter-rater reliability is the degree of agreement among multiple researchers applying the same codebook to the same source material.

Two researchers identified which codes were present in a series of excerpts from the data, and the inter-rater agreement gives a score of how much consensus there is in these simultaneous codings. This helped us judge where the code definitions leave room for misinterpretation and how reliably those codes can be applied to the data. By iterating this process we were able to address areas of concern.
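The text does not name the agreement statistic used; Cohen's kappa is a common choice for two raters because it corrects for chance agreement. A minimal sketch, assuming each researcher records whether a given code is present (1) or absent (0) in each excerpt; the ratings below are made up for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of excerpts where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently at random,
    # keeping their own label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical: presence/absence of one code across ten excerpts.
researcher_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
researcher_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(researcher_1, researcher_2), 2))
# → 0.62
```

Low kappa on a particular code is a signal that its definition in the codebook invites misinterpretation, which is exactly where a revision-and-retest cycle like the one described above focuses attention.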

The combination of affinity mapping with an interrater-supported coding approach helped us produce a refined theme set grounded in the data and a structured dataset in support of those themes.