In this paper the authors apply the concept of 'Social Navigation' (Dieberger et al., 2000) to the design of an online courseware system. They frame this in terms of integrating implicit and explicit feedback and presenting it back to individual users, and note that explicit feedback is more reliable but harder to elicit from users (Claypool et al., 2001). This matches my own experience, but I wonder to what extent that is changing as web interfaces evolve to make feedback easier.
The archetypal explicit-feedback mechanism that seems unlikely to elicit responses is the multi-part web form that appears at the bottom of places like Microsoft support pages. I guess the fundamental personal economics of giving explicit feedback are the same however easy or hard the mechanism is, but web 2.0 interfaces, like the like/not-like toggle in Facebook or the clapping-hand icon in smart.fm, that let the user leave feedback, albeit simplistic, at the touch of a button must increase the likelihood of users providing it. I have also noticed a feedback tab on a number of recent websites that opens a feedback form including links to recent comments from others (it appears to be part of the Get Satisfaction system that I previously commented on); it makes it easy for the user, without forcing them, to classify their feedback or simply add their support to the existing feedback of others.
When I say the fundamental dynamics aren't changed by these interface developments, I mean that a user with a particular goal, e.g. fixing a software problem, is likely to want to get back to their original task rather than leave feedback on a site that has provided helpful information. Any time spent on feedback is wasted time, unless there is some social aspect to the equation: if you've asked on a mailing list, failing to thank those who answer your requests is likely to affect your ability to get support in the future. Naturally different users are under different time pressures, so reducing the effort and time required to leave feedback will likely increase the numbers leaving it, but to what extent? Superficially it seems to me that the difference might not be very great unless social factors are involved. I'm unlikely to get reciprocal benefit from Microsoft by leaving feedback on one of their support pages, but if there is 'Social Translucence', in the sense that others can see the positive or negative feedback I leave on their contributions, and that subsequently affects my reputation, then the whole dynamic might change.
Anyway, that meandering digression aside: in the paper the authors conclude that implicit feedback by itself is also insufficient due to low accuracy, although I'd like to follow up and read more about those implicit metrics. They therefore advocate combining implicit and explicit feedback into a do-it-yourself approach where users' natural interaction with the system generates a mix of implicit and explicit feedback. I'm not quite sure I buy that classification, but I do very much like their approach. The key is that they attempt to make achievement of a personal goal dependent on the user's contribution to the community.
The authors make adjustments to a real university courseware system, as shown in the image to the left, modifying the lists of courses to include indicators of workload and relevance to the student's career goals. The key to the authors' do-it-yourself approach is that students are also presented with 'CareerScope' information about their progress towards their career goals, which is drawn from how they have rated the courses they have taken. This changes the dynamic so that rating courses benefits the student as well as the community. Experimental results appeared to show that when the CareerScope component was included and used, it almost doubled the number of evaluations made by students.
My main concern with the approach is that progress towards career goals is defined somewhat arbitrarily: taking four courses of medium difficulty that are relevant to a career goal constitutes achieving that goal. The authors acknowledge that this metric needs further evaluation, but I was unclear whether progress is calculated from individual students' relevance assessments or from community relevance assessments. If it is the former, I can imagine students finding it all too easy to make progress towards a career goal simply by rating the four courses they had taken as highly relevant and very hard; there seems to be an assumption that students will not 'game the system' and try to bump up their progress by evaluating courses in a particular way. Conversely, if the system uses community averages, that would prevent this sort of gaming, but then an individual student whose perception of career relevance differs from the majority might feel that they are making real progress that is not being displayed. I guess these are relatively minor niggles, or better put, interesting future research directions.
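To make the gaming concern concrete, here is a toy sketch of the two readings. Everything in it is my interpretation, not the authors' implementation: the function names, the 1-5 relevance scale, the "rating of 4 or more counts as relevant" cutoff, and the four-course goal threshold are all illustrative assumptions.

```python
# Hypothetical sketch of the career-progress concern discussed above.
# All names and thresholds are my own assumptions, not the paper's code.

from statistics import mean

GOAL_COURSES = 4  # assumed: four relevant courses satisfy a career goal


def progress_individual(own_ratings):
    """Progress from the student's own relevance ratings (1-5).

    Easy to game: rate every course you took as highly relevant.
    """
    relevant = [r for r in own_ratings if r >= 4]
    return min(len(relevant) / GOAL_COURSES, 1.0)


def progress_community(community_ratings_per_course):
    """Progress from community-average relevance for the courses taken.

    Resists gaming, but a student whose view differs from the majority
    sees displayed progress that understates their own perception.
    """
    relevant = [ratings for ratings in community_ratings_per_course
                if mean(ratings) >= 4]
    return min(len(relevant) / GOAL_COURSES, 1.0)


# A student gaming their own ratings reaches the goal immediately...
print(progress_individual([5, 5, 5, 5]))  # 1.0
# ...while the same four courses under community averages show much less.
print(progress_community([[5, 2, 3], [4, 4, 5], [2, 2, 3], [5, 1, 2]]))  # 0.25
```

The two functions give the same answer only when the student's ratings track the community consensus, which is exactly the tension described above.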
I think the overall approach is excellent, and it occurs to me that it addresses a fundamental issue of human endeavour: how to get lots of self-interested individuals to work for the good of the whole. There must be a great deal of literature on this outside of computer science recommender systems and the like. How do you get individuals to contribute to communities? Give them a framework where they contribute as a side effect of activities that benefit themselves, although I think it is challenging to devise solutions like this for arbitrary systems. I guess this approach is a mixture of implicit and explicit, although I would have thought that meant something like taking a rating and time spent on a page and computing some function of the two. Here the users are rating explicitly, but they are being asked to do it in a different context ...
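For contrast, the kind of implicit/explicit mixture I had in mind looks more like this: a weighted function of a deliberate rating and an implicit signal such as dwell time. This is a minimal sketch of my own framing, not anything from the paper; the weights, the 1-5 scale, and the saturation point are arbitrary illustrative choices.

```python
# Toy blend of explicit and implicit feedback: my framing, not the paper's.

def blended_score(rating, seconds_on_page,
                  w_explicit=0.75, w_implicit=0.25, saturation=300.0):
    """Combine an explicit 1-5 rating with dwell time into a 0-1 score.

    Dwell time saturates at `saturation` seconds so that a tab left
    open does not swamp the deliberate rating.
    """
    explicit = (rating - 1) / 4.0                      # normalise 1-5 to 0-1
    implicit = min(seconds_on_page, saturation) / saturation
    return w_explicit * explicit + w_implicit * implicit


print(blended_score(5, 300))  # 1.0: top rating with full engagement
print(blended_score(3, 30))   # mostly the middling rating, a little dwell time
```

The CareerScope approach is interestingly different from this sort of score-mixing: the signal is still an explicit rating, but the context of giving it changes.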
Cited by 18 [ATGSATOP]
1. Bretzke, H. and Vassileva, J. (2003). Motivating cooperation on peer to peer networks. In Proceedings of the 9th International Conference on User Modeling.
2. Brusilovsky, P. (2001). Adaptive hypermedia. User Modeling and User Adapted Interaction 11(1/2): 87-110.
3. Cheng, R. and Vassileva, J. (2005) Adaptive Reward Mechanism for Sustainable Online Learning Community (Cited by 25). In Proceedings of 12th International Conference on Artificial Intelligence in Education, AIED'2005.
4. Claypool, M., Le, P., Waseda, M., and Brown D. (2001). Implicit interest indicators (Cited by 290). In Proceedings of ACM Intelligent User Interfaces (IUI 2001), Santa Fe, New Mexico, USA, 33-40.
5. Dieberger, A., Dourish, P., Höök, K., Resnick, P., and Wexelblat, A. (2000). Social navigation: Techniques for building more usable systems. Interactions, 7(6), 36-45.
6. Smyth, B., Balfe, E., et al. (2004). Exploiting Query Repetition and Regularity in an Adaptive Community-Based Web Search Engine (Cited by 150). User Modeling and User-Adapted Interaction 14(5): 383-423.
7. Harper, F. M., Li, X., Chen, Y., and Konstan, J. (2005). An economic model of user rating in an online recommender system (Cited by 17). In Ardissono, L., Brna, P., and Mitrovic, A. (Eds.), Proceedings of the 10th International Conference on User Modeling (UM 2005), Edinburgh, Scotland, UK.
8. Ling, K., Beenen, G., Ludford, P., Wang, X., Chang, K., Li, X., Cosley, D., Frankowski, D., Terveen, L., Rashid, A. M., Resnick, P., and Kraut, R. (2005). Using social psychology to motivate contributions to online communities (Cited by 166). Journal of Computer-Mediated Communication, 10(4), article 10.
9. Miller, B., Albert, I., Lam, S.K., Konstan, J., and Riedl, J. (2003). MovieLens Unplugged: Experiences with a Recommender System on Four Mobile Devices (Cited by 19). Proceedings of the 17th Annual Human-Computer Interaction Conference.
10. Sarwar, B., Karypis, G., Konstan, J., and Riedl, J. (2001). Item-based Collaborative Filtering Recommendation Algorithms (Cited by 132). In Proceedings of the 10th International World Wide Web Conference.