I wrote my Peer Review on the McMillan “Soap Box” article. My primary criticism of the research centered on McMillan's coding, or “reduction,” of the data. All I ended up seeing was a seemingly arbitrary sorting of interviewees into the categories of a neat (but incredibly vague and questionable) table. McMillan didn't seem to consider that the interviewees might use a variety of media outlets on a day-to-day basis. Personally, I find it difficult to define “media,” and I tend to rely on a variety of different sources myself. McMillan also didn't seem to consider researcher bias (Luker's “fish problem”) or alternate viewpoints. Some of the categories consisted of only one interviewee, which led me to think that a much larger interviewee base (or quantitative methods) might have helped. I really don't think this kind of “redundancy” can be claimed after only 18 interviews.
Because of the Soap Box article, I found myself reconsidering whether to incorporate a mixed-methods approach into my research design. However, I was brought back to Earth after reading Knight, who points out the drawbacks of quantitative research, including its cost and the risk of faulty judgments. I tend to agree with his advice: “The temptation to exploit the potential of numbers needs to be resisted unless the data really are of the right sorts” (p. 177).
-Martin