Studies Mislead, Details Matter

It's been almost a year since a huge federal study of hormone replacement therapy indicated that - contrary to previous studies - women may be at risk when taking a hormone replacement drug to deal with symptoms of menopause. In the intervening year, scientists and statisticians have failed to determine which study is incorrect, or whether both sides tell part of the truth. That these different methodologies cannot be reconciled even in the intensely scrutinized realm of medical studies suggests that many of our market research efforts must be hopelessly simplistic.

The most respected study showing benefits of hormone replacement was what is termed an "observational" study, in which thousands of subjects - nurses, in this case - reported on their drug use and their health. Correlating the two, the study showed a 30% reduction in heart attacks among those taking the therapy.

The new federal study used the more rigorous "double-blind" method of testing, giving some people the therapy and others placebos. Its results showed a 40% increase in heart attacks among those taking the therapy. The results were divergent enough to disturb the medical community, making doctors question the research methods themselves.

To draw an analogy, this is somewhat like the difference between observing consumer behavior through server logs and asking about it through online surveys. We would expect both to point in the same direction, but in practice this often doesn't bear out.

To be meaningful, it turns out, the study of consumer behavior needs to keep track of a vast number of details. In theory, those details may be there in the server logs, but a study rarely bothers to account for all of the subtle influence these additional dimensions might exert. This is why a great deal of consumer research is more an exercise in marketing departments patting themselves on the back, showing how successful they've been. To truly draw insight out of the data, we need to get into the nitty-gritty.
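To make the point concrete, here is a minimal sketch - with fabricated log entries and a deliberately simplified format - of the difference between the headline number most reports stop at and the extra dimensions sitting in the same logs:

```python
from collections import Counter

# Hypothetical, simplified log entries: (path, referrer, hour of day).
# Real server logs carry many more such dimensions per request.
log = [
    ("/product/42", "search", 9),
    ("/product/42", "email",  21),
    ("/product/42", "search", 10),
    ("/checkout",   "email",  22),
]

hits = len(log)  # the headline figure: total page views
by_referrer = Counter(ref for _, ref, _ in log)
by_daypart = Counter(
    "work hours" if 9 <= hour <= 17 else "evening"
    for _, _, hour in log
)

print(hits)
print(dict(by_referrer))
print(dict(by_daypart))
```

The raw hit count alone says nothing about who came or why; slicing the same records by referrer and time of day is where the actual insight starts.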

We need to know what mental mode different users were in when they exhibited certain behaviors - something that may be derivable from traffic patterns - and make assumptions about intent in order to properly establish market learnings. When, instead, we generalize across the whole audience, the learning is at best lost in the average. Or, at worst, the figures show an average that is actively deceptive.
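A sketch of the worst case, using entirely hypothetical numbers: a new page design converts better within every visitor segment, yet the blended average makes it look worse, simply because the traffic mix differs between the two designs.

```python
# segment -> (visits, purchases); all figures are made up for illustration
designs = {
    "old": {"ready_to_buy": (500, 400), "researching": (500, 10)},
    "new": {"ready_to_buy": (10, 9),    "researching": (990, 30)},
}

def rate(visits, purchases):
    return purchases / visits

# Within each segment, the new design converts better.
for segment in ("ready_to_buy", "researching"):
    old_r = rate(*designs["old"][segment])
    new_r = rate(*designs["new"][segment])
    print(f"{segment}: old {old_r:.1%} vs new {new_r:.1%}")

# Yet the blended averages point the other way, because almost all of
# the new design's traffic happened to be low-intent "researching" visits.
old_total = rate(sum(v for v, _ in designs["old"].values()),
                 sum(p for _, p in designs["old"].values()))
new_total = rate(sum(v for v, _ in designs["new"].values()),
                 sum(p for _, p in designs["new"].values()))
print(f"overall: old {old_total:.1%} vs new {new_total:.1%}")
```

An analyst who never segments by intent would conclude the old design is far better, when the segment-level data says the opposite - exactly the kind of deceptive average the paragraph above warns about.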

I actually feel a little better about the poor state of our knowledge of online behavior relative to our potential. It's not just us, and it's not just because much of our research is self-serving. Even the brain surgeons are having difficulty, which makes a liberal arts major like me feel much better. From here, though, we need to determine what levels of additional detail to add to our analysis and develop the means of acquiring them.
