The authors describe their own analysis of pricing and behaviour in the Google Answers system and compare it with the analysis of Edelman (2004). Edelman found that more specialized answerers earned less per hour, explaining that when a researcher stays within a particular field they lose opportunities in other fields. The workshop paper refers to some interesting economic properties of information, such as:
Information is expensive to produce and cheap to reproduce (Bates, 1989; Shapiro and Varian, 1999).
... (information's) value is revealed only after consumption (Shapiro and Varian, 1999; Van Alstyne, 1999).
They also note that:
Behavioral research revealed that the value of information is derived from perceptions of at least three central elements: cost, quality, and ownership (Toften and Olsen, 2004; Raban and Rafaeli, 2005).
The researchers' initial inspection of the data suggested a correlation between economic incentive and the number of questions answered; tips were only very weakly correlated, and the socially constructed ratings were not correlated at all.
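The kind of correlation check the researchers describe can be sketched as follows. The data here is entirely invented for illustration (the real study used the Google Answers transaction logs), and the Pearson coefficient merely stands in for whatever statistic the authors actually computed:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical questions: the listed price (the economic incentive)
# and whether each question was answered (1) or not (0).
prices   = [2, 5, 10, 25, 50, 100, 4, 80, 60, 3]
answered = [0, 0, 1, 1, 1, 1, 0, 1, 1, 0]

r = pearson(prices, answered)
print(f"price vs. answered: r = {r:.2f}")  # positive: pricier questions get answered
```

The same computation applied to tips and to ratings would, on the paper's account, yield a weak and a near-zero coefficient respectively.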
Of course I should have been reading and citing the authors' subsequent journal paper: Rafaeli, S., Raban, D., Ravid, G. How Social Motivation Enhances Economic Activity and Incentives in the Google Answers Knowledge Sharing Market. Int. Journal of Knowledge and Learning 3, 1 (2007), 1--11, but I had the earlier conference paper printed out ... but then again, the technical paper ends rather abruptly with the conclusion that:
... when interaction is present, the social parameters of rating and comments contribute incentives to the formation of participation, beyond the role of economic incentives ...
I didn't quite follow this from the conference paper, but it gets explained in more detail in the subsequent journal paper. It seems that if one considers only those questions that generated a discussion (at least one comment), then tips and social ratings are correlated with the question being answered. More detailed analysis in the journal paper of the mean ratio of comments per answer indicated that comments from the community given before answers from experts are correlated with the likelihood of answers from experts, but only at the level of individual experts. This contrasts with a reduced number of questions answered by experts where comments are present, when the analysis is done not at the expert level but across the entire site. The authors explain this at the site level as follows:
... if sufficient help was provided by a comment, there is less need or room for an answer. Ethical experts will not post a paid answer where an informative comment was submitted.
They explain the converse relationship at the individual expert level in this fashion:
Experts seem to be drawn to questions that generate much interest, comments, activity and tend to answer those questions more often. This may fill a social need but may also serve an economic purpose of enhancing the expert's reputation by getting exposure to more eyeballs.
This contradiction at the different levels of analysis only makes sense to me when one takes into account that comments may, or may not, answer the question posed. If the answer is contained in the comments, they will serve as a disincentive to expert answers, whereas comments that do not answer the question are indicative of interest in the answer and as such serve as an incentive.
The authors consider two possible explanations for this and the related finding of increased levels of tipping for individual experts when questions have many comments:
... comments may enhance the overall perceived quality of all knowledge provided, as answer or comments, so the asker becomes more inclined to tip ... (or) ... it may be that the asker feels some social pressure by the presence of the comment contributors which leads him/her to provide a tip as a social norm.
It seems to me that the latter explanation is the more likely, and the authors go on to cite a number of papers on the subject of social facilitation, which suggests they also lean towards a similar explanation. However, it is difficult to imagine an objective experiment or analysis which would tease these two possible explanations apart, although one might experiment with auto-generated comments or something similar.
This makes me think that critical mass, for an individual piece of information or across an entire site, may be a function of the likelihood with which individuals perceive their activity as being observed and/or approved of by others. For example, if lots of others, particularly existing colleagues or respected individuals, are undertaking certain activities in regard to a post, or towards a site, then this increases the likelihood of an individual participating; when this likelihood (combined with frequency of encounter) reaches a tipping point over the system as a whole, the critical mass needed to achieve a thriving online community will be reached.
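The tipping-point intuition above can be made concrete with a Granovetter-style threshold model. This is my own toy illustration, not anything from the papers: every member joins in once the fraction of participants they perceive exceeds a personal threshold, and whether a seed cascades to the whole community depends sensitively on the threshold distribution.

```python
def steady_state(thresholds, seed_fraction):
    """Iterate participation to a fixed point from an initial seed fraction.

    A member participates when the currently active fraction meets or
    exceeds their personal threshold.
    """
    active = seed_fraction
    while True:
        new_active = sum(1 for t in thresholds if t <= active) / len(thresholds)
        if new_active == active:
            return active
        active = new_active

# Hypothetical community of 100 members with evenly spread thresholds:
# each new participant tips the next person over, so activity cascades fully.
even = [i / 100 for i in range(100)]         # 0.00, 0.01, ..., 0.99
high = steady_state(even, 0.0)

# Identical community except one member's threshold is nudged from 0.01
# to 0.02: the chain breaks and participation stalls almost immediately.
gapped = list(even)
gapped[1] = 0.02
low = steady_state(gapped, 0.0)

print(low, high)  # → 0.01 1.0
```

The near-identical populations ending up at 1% versus 100% participation is what makes "critical mass" arguments like the one above so hard to test empirically: tiny shifts in who feels observed or approved of can flip the whole outcome.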
I originally found this paper because it cited Ling et al. (2005), which I consider a seminal study on motivation in online communities. I am particularly interested in how certain design patterns influence participation in online communities, and Ling et al. was one of the first papers I found that took an experimental approach to this subject; something I perceive to be missing from the existing research on online community design patterns.
The journal paper has a couple of non-self-citations at the time of this post: http://scholar.google.com/scholar?cites=13346321855162398665
There also appear to have been a few other analyses of Google Answers since this paper was published: http://scholar.google.com/scholar?q=%22Google+Answers%22
Interestingly, Google Answers shut down on December 1st, 2006. Other services, such as Mahalo Answers and UClue, have sprung up in its place. There were also some non-monetary services that ran in parallel with Google Answers and continue to thrive, such as Yahoo Answers and Answer Bag. Wikipedia and this blog post have a good discussion of some of the reasons for the shutdown.