Codification/Reduction of Data

I wrote my Peer Review on the McMillan “Soap Box” article. My primary criticism of the research centered on McMillan's coding or “reduction” of data. All I ended up seeing was the seemingly arbitrary sorting of interviewees into a neat (but incredibly vague and questionable) table. McMillan didn't seem to consider that the interviewees might use a variety of media outlets on a day-to-day basis. Personally, I find it difficult to define “media,” and tend to use a variety of different sources. McMillan also didn't seem to consider researcher bias (Luker's “fish problem”) or alternate viewpoints. Some of the categories consisted of only one interviewee, which led me to think that a much larger interviewee base (or quantitative methods) might have helped. I really didn't think that this kind of “redundancy” could be claimed after 18 interviews.
Because of the Soap Box article, I reconsidered incorporating a mixed-method approach in my research design. However, I was brought back to Earth after reading Knight, who points out the costs and the risk of faulty judgments involved in quantitative research. I would tend to agree with his advice: “The temptation to exploit the potential of numbers needs to be resisted unless the data really are of the right sorts” (p. 177).

Census and Research

Recently, I took part in a debate about the Long Form Census and whether it should be cancelled. That exposed me to this singular method, and authority, for collecting data from a large share of the population. It also brought home the fact that an equally immense number of organizations depend on that data for their research work, and that the information collected through the census is subjected to considerable and continuous analysis.

The Canadian Research Data Centre Network (CRDCN) states on its website that since 2000 it has been in partnership with Statistics Canada to “transform quantitative social science research in Canada”. Researchers analyze census data to deepen their understanding of Canadian society. The census functions as the primary source of information about the population of Canada; it is, in fact, a benchmark against which all other data are measured and evaluated. It provides knowledge about language, education, income, housing, geographic mobility, ethnicity and so on, and it is widely used by policy makers, city planners, businesses, marketing researchers, and NGOs. Even though it raises issues regarding privacy, it is also true that it provides us researchers with an immense treasure trove to dig into.

Immersed in literature

Hine's (2004) examination of Internet ethnography, like those of Wheeler (2010) and Miller and Slater (2000), among many others, conceptualizes the Internet as a 'place', a 'network' or a 'community'. One thus studies a part of the Internet just as one would study a village, a grassroots association or a practice - by examining the people linked to it and the relationships between them.

However, while discussing my thesis proposal with Professor Grimes today, it hit me that not all parts of the Internet are conducive to this type of study. In fact, some parts of the Internet would be better qualified as 'technologies' or even 'artifacts' than as 'places'. This applies to my chosen area of study, the US Government's geographic information system. While an ethnographic lens, particularly the one described by Star (1999), may be useful in examining the politics of the GIS, Pinch and Bijker's (1984) social construction of technology framework/method might provide the right bridge between relationships and technology.

Along the same lines, at the DIY Citizenship conference at the University of Toronto this weekend, Ron Deibert of the Citizen Lab talked about the methods his team used to study cyber attacks on the Office of the Dalai Lama in Dharamsala, India. Deibert discussed what he termed a 'fusion methodology', which combines field methods (participant observation + focused interviews) with technical interrogation (in-depth analysis of the technologies in play). This gives equal weight to the social interactions and the technology itself, differing from Star's method, which examines technology only as a small part of the ethnographic study.

The final report, entitled Shadows in the Cloud and produced by the Information Warfare Monitor and the Shadowserver Foundation, provides an interesting description of this mixed method - definitely worth considering for those approaching their research through science and technology studies.

Oh my Facebook

I'm using an article entitled “The Librarian as Video Game Player” (Kirriemuir, 2006) for an INF1300 Annotated Bibliography project, and I thought it brought up an interesting argument. Kirriemuir states that gamers are typically capable of multi-tasking, of using sophisticated information-locating resources (online and offline), of installing hardware, and of using social networking tools effectively (all of which happen to be fairly relevant skills in a library). I narcissistically like to think that I possess many of these strengths, save one.

Yes, in a futile effort to recover my humanity, I have deleted my Facebook account. I hope I don't come across here as someone who thinks he is “above” social networking sites, but part of it had to do with the way people whip out their smartphones in the middle of social gatherings. Another part of it had to do with the fact that I don't actually care how well my friends are doing in Farmville. The list goes on but I'm pretty certain I'll have to reactivate at some point, for some reason or another. It's simply too ingrained in our culture.

I thought this topic was quite relevant to the Orgad article, since the blurring of lines between online and offline lately seems fairly substantial. I would agree that the Internet is an extension of people's lives, and that studying online/offline in conjunction could yield valuable insights, depending on the research question.


Detecting spam in a Twitter network

Nowadays Twitter has become one of the biggest social networking sites around. Twitter is a micro-blogging service where users can post 140-character messages called tweets. It has some useful features which make it especially helpful for research on almost any topic under the sun: Twitter pages are viewable to anyone, even those without accounts, and the site has a search function which pulls in all recent posts dealing with a given topic or phrase. Unlike Facebook and MySpace, Twitter is directed, meaning that a user can follow another user without the second user being required to follow back. Most accounts are public and can be followed without requiring the owner's approval. With this structure, spammers can easily follow legitimate users as well as other spammers.
As online social networking sites become more and more popular, they have also attracted the attention of spammers. In the First Monday article “Detecting spam in a Twitter network”, Yardi and others study Twitter, a popular micro-blogging service, as an example of spam-bot detection on online social networking sites. A machine learning approach is proposed to distinguish spam bots from normal users. To facilitate detection, graph-based features such as the number of friends and the number of followers are extracted to explore the distinctive follower and friend relationships among users on Twitter.
Unfortunately, spam is becoming an increasing problem on Twitter, as it is on other online social networking sites. Spammers use Twitter as a tool to post multiple duplicate updates containing malicious links, abuse the reply function to post unsolicited messages to users, and hijack trending topics.
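To make the idea of graph-based features concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the toy follow graph, the choice of features, and the ratio threshold are my own assumptions, not the actual model or data from the Yardi et al. study. It only shows the general intuition that spammers tend to follow many accounts while few follow them back.

```python
# Toy sketch of graph-based spam features on a directed follow graph.
# The accounts, features, and threshold below are hypothetical, not the
# actual model from the Yardi et al. article.

def follow_features(user, follows):
    """Compute simple graph features for `user`.

    `follows` is a set of directed edges (follower, followee):
    (a, b) means a follows b.
    """
    friends = {b for (a, b) in follows if a == user}    # accounts user follows
    followers = {a for (a, b) in follows if b == user}  # accounts following user
    ratio = len(followers) / len(friends) if friends else 0.0
    return {"friends": len(friends), "followers": len(followers), "ratio": ratio}

def looks_spammy(features, min_ratio=0.1):
    # Naive heuristic: flag accounts that follow many users
    # but have almost no one following back.
    return features["friends"] > 0 and features["ratio"] < min_ratio

# Tiny example graph: "spambot" follows three users; no one follows it back.
edges = {("spambot", "alice"), ("spambot", "bob"), ("spambot", "carol"),
         ("alice", "bob"), ("bob", "alice")}

print(looks_spammy(follow_features("spambot", edges)))  # True
print(looks_spammy(follow_features("alice", edges)))    # False
```

A real detector would feed features like these (plus content-based ones, such as duplicate tweets or link frequency) into a trained classifier rather than a hand-set threshold, but the follower/friend asymmetry that the directed graph exposes is the core idea.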

Network( -ed / -ing)

Though there is no novelty in thinking about cultural complexity, this week's readings have driven it home through their discussions of how research methods form around the Internet. The same metaphor kept popping up for me, and was even stated explicitly a number of times: that of the network.

From Hine's ethnographic analysis (since studying phenomena requires a point of origin and a definition of perimeter) to Orgad's online/offline discussion (in which these different data should “mutually contextualize themselves” [p. 48]), the readings presented phenomena as a blurred, interconnected array of factors that any research method must be conscious of.

I am a little intrigued by the way the network metaphor doubles as both the Internet's architecture and this theoretical understanding of the world (I doubt it's done tongue-in-cheek, though I don't think it's completely haphazard). Regardless, the metaphor offers valuable insight into both the Internet and ethnography broadly. I was reminded of a recent lecture (image below) about the increasing intention of web design to tap into this network idea, where offline/online behaviours are so connected that the line becomes blurred (think Foursquare, NikeRun, and continuing experiments with networked objects and cities). While canonical methods approach the Internet as an artifact, web designers are attempting to evolve the Internet as culture; in the meantime, researchers like us are attempting to adapt to both.