RESEARCHING THE REAL WORLD



© Lee Harvey 2012–2024

Page updated 8 January, 2024

Citation reference: Harvey, L., 2012–2024, Researching the Real World, available at qualityresearchinternational.com/methodology
All rights belong to the author.


 

A Guide to Methodology

3. Observation

3.1 Introduction
3.2 Aspects
3.3 Methodological approaches
3.4 Access
3.5 Recording data
3.6 Analysing observational or ethnographic data

3.6.1 Introduction
3.6.2 Electronic qualitative data analysis packages

3.7 Summary


3.6 Analysing observational or ethnographic data

3.6.1 Introduction
Ethnographic research invariably leads to the collection of an enormous amount of detailed material: accounts, quotes, examples, anecdotes and so on. The production of a finished ethnographic report requires a selection from this detail. The choice of material is guided by the theoretical framework (or angle) that has emerged during the study.

A major problem that observation researchers face is how to deal with the vast amount of material. How can the data be sorted, coded, organised, and ultimately reported? There is often so much material that the researcher is overwhelmed by it and does not know where to begin or what sense it all makes. Even if the researcher has a clear idea of what the data is pointing to he or she may not know how to organise it to present it to its best advantage.

Probably the best way is the so-called 'pile building' approach (Harvey and MacDonald, 1993). This involves several stages and is a useful general approach whatever the purpose of the research. It is much easier and cheaper to carry out if you have used a word-processor to write up your observations. There are also software packages that assist the process (Section 3.6.2).

Stage one is to copy all the data. Leave the top copy alone and use it only for reference purposes.

Stage two involves reading the data 'vertically'. That is, field notes are read chronologically from start to finish. They may be read several times so that the researcher has a good idea of what is in the notes. You may reread the whole thing as a block or break it up by reading about each individual or subgroup in the study in turn. Whichever way you do it, make sure that you are familiar with the contents of the field notes before you go on to the next stage.

Stage three involves identifying major themes that seem to recur throughout the data and have a bearing on the theoretical concerns. Make a note of these based on your 'vertical' reading.

Stage four involves going through the data and dividing each day's field notes into sections that deal with the particular themes. On each segment, note the original time and place of the observation, its precise location in the top copy of the recorded observations, the people involved, the sort of activity going on, and the theme or themes that occur in the extract. Where an extract involves more than one theme then you will need to copy the section so that you have a separate version of the section for each of the themes. Note that some software packages can help make this less time-consuming.
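The bookkeeping in stage four can be sketched in code. The following is a minimal Python illustration with invented field names (it does not reflect the data model of any particular software package): each segment carries its provenance, and an extract that touches more than one theme is duplicated so that each theme gets its own copy.

```python
# Hypothetical record for one coded field-note segment (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Segment:
    when: str        # original time of the observation
    place: str       # where the observation was made
    location: str    # precise position in the top copy, e.g. "day 3, p. 12"
    people: list     # who was involved
    activity: str    # the sort of activity going on
    themes: list = field(default_factory=list)  # theme(s) occurring in the extract

def split_by_theme(segment):
    """An extract involving several themes becomes one copy per theme."""
    return [Segment(segment.when, segment.place, segment.location,
                    list(segment.people), segment.activity, [theme])
            for theme in segment.themes]
```

A segment tagged with two themes thus yields two single-theme copies, one for each eventual 'pile', while the reference back to the top copy is preserved in each.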

Stage five is to read the data 'horizontally' by themes. To do this, some ethnographers literally cut up their material and arrange it, according to themes, in piles (on the floor). This is why the approach is called 'pile building'. The process can equally well be done electronically using a word-processor (as we saw with Filby's field notes (Extract 6.1, page 153)). There are also specialist programs available on personal computers for sorting and analysing qualitative field notes (Section 3.6.2).
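Done electronically, this 'horizontal' sorting amounts to grouping tagged segments by theme. A minimal sketch in Python (the extracts and theme labels are invented examples, and no specific package is implied):

```python
# Electronic 'pile building': group theme-tagged segments into piles.
from collections import defaultdict

# Invented example segments, each tagged with one or more themes.
segments = [
    {"text": "Manager checks the till twice.", "themes": ["control"]},
    {"text": "Waitress jokes with regulars.",  "themes": ["banter", "regulars"]},
    {"text": "Cook defers to the manager.",    "themes": ["control"]},
]

piles = defaultdict(list)
for seg in segments:
    for theme in seg["themes"]:   # a multi-theme extract joins every relevant pile
        piles[theme].append(seg["text"])

# Each pile can now be read 'horizontally' as a block.
```

The piles are the electronic equivalent of the heaps of cut-up paper on the floor: each one collects every extract bearing a given theme, ready for a horizontal reading.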

Stage six involves assessing whether the 'horizontal' reading by theme makes sense. When reading through the data that has been allocated to each theme does it provide a cohesive account (like chapters in a book) or does it appear to be incomplete? Is there any interrelationship between the themes? Is such interrelationship consistent? Do other themes, that you had not noticed originally, appear to be emerging when you read the data horizontally?

Stage seven involves identifying additional themes, or removing or collapsing the first selection of themes until another, more useful and revealing set of themes emerges. The data is then re-read using the new themes. Another set of piles is built by cutting up a second copy into sections on the basis of the new themes and these are read. Using a software package, of course, enables you to do this without literally having to create a new copy and cut it up.
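Revising the theme set electronically need not mean cutting up a second copy: a mapping from the old themes to the revised ones can regroup the same extracts. A hedged sketch, again with invented labels and no particular package in mind:

```python
# Stage seven done electronically: map old themes onto a revised theme
# set and regroup the extracts, rather than re-cutting a paper copy.
from collections import defaultdict

revision = {                 # old theme -> new (possibly merged) theme
    "banter": "workplace culture",
    "regulars": "workplace culture",
    "control": "control",
}

old_piles = {
    "control": ["Cook defers to the manager."],
    "banter": ["Waitress jokes with regulars."],
    "regulars": ["Waitress jokes with regulars."],
}

new_piles = defaultdict(list)
for old_theme, extracts in old_piles.items():
    new_theme = revision[old_theme]
    for text in extracts:
        if text not in new_piles[new_theme]:   # avoid duplicates when themes merge
            new_piles[new_theme].append(text)
```

Collapsing two themes into one, as here, is exactly the kind of revision that is tedious on paper but trivial with a new mapping.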

Stage eight involves asking whether the new system of themes works better than the previous one. If so, it might be the basis of the analysis and report, or it might be necessary to derive yet another set of themes because the current one is still not quite right. It might be that the first breakdown into themes was better than the second one, so you go back to the previous version.

Finally, the report is organised around the themes that have been identified and the most revealing and clear examples from amongst the separate theme piles are used to illustrate the report.

Make sure that you do not overload the report with examples. Inevitably, there will be examples that you would like to include but cannot because they simply repeat other examples.

Although Harvey and MacDonald spelled this approach out in 1993, it remains a generic approach, despite computerisation. For example, Fielding (2004, p. 301) repeats this scheme in shorthand:

The procedures most researchers use to manage and prepare data for analysis are quite straightforward. These involve: compiling the corpus of data (field notes, transcripts); searching for categories and patterns in the data; marking the data with category (or 'code') labels; and constructing thematic outlines using the codes to lay out the sequence in which topics will be considered. These procedures formerly involved the physical manipulation of data (literally cutting up data and sorting them into sets of associated extracts) but nowadays the process can be conducted online (although some still prefer 'manual' methods, especially for small scale studies). (Fielding, 2004, p. 301)

Activity 3.6.1
CASE STUDY Dixie's Place and CASE STUDY Fast Food Restaurants are summary accounts of two participant observation studies of restaurants. The language, style and concepts of the originals are retained. Read the two reviews and compare the way they approach participant observation research. In particular, what theories do they use and in what context do they set the activities?
This activity involves critical reading and analysis. About 30 minutes as an individual activity.


3.6.2 Electronic qualitative data analysis packages
A variety of computer programmes have been developed to help you deal with large amounts of qualitative data. Computer Assisted/Aided Qualitative Data Analysis Software (CAQDAS) packages are specifically designed to assist with qualitative data analysis. These include ATLAS.ti, NVivo (formerly NUD*IST), Ethnograph and Framework (both now also integrated into NVivo), QDA Miner, Dedoose, HyperRESEARCH, MAXQDA, Qualrus and TextQuest. In addition, some more limited web-based systems, such as Saturate and TAMS Analyzer, enable categorisation, theme identification, memoing and coding of text and, in some cases, audio data.

Some generic databases that can handle extensive qualitative data, such as FileMaker Pro and, in the past, HyperCard, can also be used. If you don't have access to sophisticated, expensive packages there are some freely available downloads that you could try (these tend to come and go on the Internet, so you need to explore).

Computer packages tend to change rapidly: new ones come on the market, others disappear and some are significantly modified. The following does not provide a detailed account of a specific package; such details would, in any event, be out of date very quickly.

The early attempts at producing software aids for qualitative data analysis began in the 1960s but it was the 1990s before they became more sophisticated and useful. Some in the second decade of the 21st century are solely text based but others can deal with images, sound and video (some such as Transana specialise in digital video or audio data). Some software packages incorporate quantitative analysis alongside qualitative data analysis aids. Most of these packages need a significant training period (one or two days) before they can be used effectively.

Software aids for qualitative research are designed to deal with some or most of the following: transcription analysis; coding; annotating; summarisation; text interpretation; thematic analysis; text-mining; content analysis; discourse analysis; readability analysis; recursive abstraction.

In all cases the text or image has to be prepared in some form prior to applying the software. This may include pre-coding data in some cases, while in others the program uses semantic rules to suggest or apply coding. This then suggests limits to its applicability. You need to weigh up the time to learn the software plus the data preparation time versus the time to do the whole thing without the CAQDAS software.

In all cases, and this is extremely important, you need a clearly defined research purpose, analytic strategy and methodology before using a software package to analyse qualitative data. The package helps you do mundane jobs; it does not think for you. It may suggest associations and themes that you had not thought of, but you need to address these conceptually to see if they make sense, not just accept them as a basis for theorising. They may just be contrived or coincidental nonsense.

It is you, as the researcher, who is in charge of the analysis; do not become lazy and allow the software to dictate what you should be looking at. This is why you need to (a) be familiar with the data before analysis, (b) have a clear understanding of your research purpose and (c) understand what the software package is doing and how it generates its outputs. There is no software that can do the conceptualising for you. If you throw data into the machine without any idea of what you are doing you will just get rubbish back out.

In their study of 'foodwork' (buying and preparing food), Beagan et al. (2008, p. 657) undertook observations and interviews with 46 families.

Taped interviews and grocery trips, and the researchers' observation notes were transcribed and analysed using qualitative data analysis software, ATLAS/ti. Themes were generated in an in-depth examination of the transcripts by sorting, clustering and comparing segments of transcribed text to describe, organize and interpret participants' rationales for foodwork....

They concluded that the decrease in gender inequity in domestic labour that has been assumed over several decades does not appear to be happening.
Rather, traditional gender roles seem to reinvent themselves in new guises. In foodwork, the same old gender expectations persist, albeit no longer couched in sexist terms but constructed rather more complexly, referring to individual choices, preferences and standards of work.

