Emic and etic

Just trying to get my head around the terms emic and etic. Obviously Wikipedia is useful as ever, although James Lett’s article offers a bit more depth.

In essence “emic” and “etic” describe different kinds of data that social researchers collect.

Emic describes the behaviour, beliefs and attitudes of people from within a particular culture.

Etic describes the behaviour, beliefs and attitudes of observers from outside a culture, particularly in relation to the observation of other cultures.

The question, then, is how one should value the etic perspective in relation to the emic. Does the etic (if done well) offer an objective, scientific analysis of the emic? Or is the etic just another, equally subjective, perspective with no more intrinsic value than the emic? How you answer that depends on how you see the world and how much faith you place in the world view of objective, professional or scientific observers.

However, even if you don’t believe that the etic perspective represents “truth” or “science”, it is still possible to see the concept as useful. If the etic represents the struggle to understand and to analyse, to identify patterns and to ask why, it clearly offers something that the immersive knowing of the emic perspective does not. To understand social and cultural activity we probably need to both inhabit the emic and construct the etic.

Does that sound about right?

Using mixed methods in online research

I’ve been thinking about online ethnographic methods a bit recently and I chanced across an interesting article entitled Combining ethnographic and clickstream data to identify user Web browsing strategies, which essentially explores how you can draw together quantitative analysis of the statistics generated by user behaviour with qualitative investigation of the hows and whys of that behaviour.

The paper is interested in the ergonomics of websites: how do users interact with websites, and how can we improve that experience for them? As anyone who has ever had any level of responsibility for a website will be aware, it is possible to access a vast range of statistics about people’s behaviour online. We can find out what pages people have visited, how long they have spent on a particular page, where they came from and where they are going.

The problem with this kind of data is twofold. Firstly, it is really difficult to get hold of any meaningful benchmarking data. How long should people spend on a web page? Even if you can source an average or typical figure from somewhere, your page differs from those on other sites in its content, complexity, purpose and so on. Secondly, it is difficult to use these figures to answer the why questions. Why do people stay on one page so long and leave another so quickly? What is it that makes one page popular and another unpopular? Web statistics raise issues, but they don’t really give you the tools to answer them.
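To make those limitations concrete, here is a minimal sketch of how time-on-page is usually inferred from clickstream data: as the gap between consecutive hits within a session. It’s in Python with made-up log data (the paper doesn’t describe its actual tooling), but it shows the blind spots directly: a cached “back” navigation never reaches the server at all, and the last page of a session has no following hit to measure against.

    from collections import defaultdict
    from datetime import datetime

    # Toy clickstream: (session, timestamp, page) rows, roughly what a server
    # log records. A cached "back" navigation never reaches the server, so it
    # simply never appears here.
    clicks = [
        ("s1", "2011-03-20 10:00:00", "/home"),
        ("s1", "2011-03-20 10:00:40", "/lectures"),
        ("s1", "2011-03-20 10:09:40", "/lectures/week2"),
        ("s1", "2011-03-20 10:10:00", "/forum"),
    ]

    def dwell_times(clicks):
        """Estimate time on page as the gap between consecutive hits in a session."""
        sessions = defaultdict(list)
        for session, ts, page in clicks:
            sessions[session].append((datetime.fromisoformat(ts), page))
        totals = defaultdict(float)
        for hits in sessions.values():
            hits.sort()
            for (t1, page), (t2, _) in zip(hits, hits[1:]):
                totals[page] += (t2 - t1).total_seconds()
            # The final page of each session has no next hit, so its dwell
            # time is simply unknown.
        return dict(totals)

    print(dwell_times(clicks))
    # {'/home': 40.0, '/lectures': 540.0, '/lectures/week2': 20.0}

Nothing in this calculation can tell you whether those 540 seconds on /lectures were spent reading intently or making a cup of tea, which is exactly the gap the qualitative side of the study goes on to fill.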

The paper notes that it is also possible to gain insights into users’ experiences of websites by watching them and asking them questions (the paper refers to this process as ethnography, a label which I fear hardcore doctrinaire ethnographers would question). The study describes a three-component methodology:
1. 86 students were asked to interact with a website that supported their course. The web statistics for that site were then analysed.
2. Surveys were administered to the entire cohort in week two and week eight of the course; 74 were completed in week two and 60 in week eight.
3. Six participants were interviewed and observed during their second week of using the website in the University computer lab. Two of these were then observed in week eight using their own laptops to access the site from home.

What was found in this particular study is actually pretty interesting, even if you don’t care about web usability. One of the major findings was that the web statistics were not giving an accurate picture of the usage of the site. Students’ use of the “back” button was not being measured and recorded, so the web statistics were painting a misleading picture. This only became apparent when students were observed and the observations compared with the data being produced. In addition to highlighting some of the technical limitations of the web statistics, the observational data also enabled new analysis of the results. So “habits of walking away from the computer during study to prepare food, visit people, or simply take a break from reading, meant that time spent on any one page became virtually meaningless as a determinant in assessing behaviour”. Previously researchers had assumed that time on a page would correlate with the level of interest in that page, but the observational findings suggested alternative interpretations that the web statistics alone could not support. In the main, however, the study found that the qualitative and quantitative observations supported each other and provided a useful triangulation of results.

The article finds the mixed methods approach to be extremely powerful, both in helping to analyse and explain the data that is gathered and in allowing for methodological development. It is only once we know the problems with the data we have been collecting that we can begin to adjust our methodology and collect the data differently.

David Brooks on the social animal

This is an interesting film of David Brooks talking about education, emotion and how we need to recognise that people are social animals. Much of his argument connects closely to the ideas I talked about discovering in The Decisive Moment. In essence he argues that there are limits to conscious rationality and that people are highly complex, but that this complexity is a strength and often leads to better decisions than a narrowly rational approach would. Brooks also starts to talk about how narrowly rationalist approaches have warped policy development, and I’d like to hear more about what he’s got to say on that.