Thursday, December 3, 2009

Mobile Social Navigation apps

I recently read two mobile social navigation studies, Barkhuus et al (2008) and Bilandzic et al (2008), that I wish I had conducted myself (particularly Barkhuus et al), although the Bilandzic et al result that people were unlikely to phone complete strangers for help finding coffee shops did seem somewhat obvious :-) Interestingly, the solution to this problem that Bilandzic et al suggest is similar to the awareness approach taken by Barkhuus et al, so they make a very interesting couple of papers to read in parallel.

There are also some interesting notes to make about recent commercial technological developments. The CityFlocks system developed by Bilandzic et al has for the most part been replicated by the default Google Maps application on Android phones: if you now search for a restaurant on an Android phone you get an aggregated list of the reviews of that restaurant from multiple review sites, which includes some information about the reviewer (i.e. their name), if not their contact details. This functionality is not yet available in Google Maps on the iPhone, although it is slated to become available in the future. Latest updates about Google mobile stuff here: http://googlemobile.blogspot.com/

Similarly, the functionality developed for Connecto by Barkhuus et al is almost completely replicated by Google Latitude (http://www.google.com/latitude/intro.html), in that you can see the locations of your friends, and you can set your location manually or automatically. The big thing missing from Google Latitude (which runs on the iPhone (in browser) and Android) when compared with Connecto is that it doesn't appear to run in the background (at least on the iPhone) and doesn't integrate the information about friend status into the contact list, which I think is one of Connecto's great features. I'm assuming that this kind of integration (friend location and contact list) will be hard on the iPhone, and potentially easy on Android. Here's a nice blog post showing you what Google Latitude looks like on Android:

http://androidcommunity.com/google-latitude-location-sharing-app-hitting-android-20090204/

Also of potential interest is that Google Maps on Android now supports layers, e.g. Wikipedia, traffic, your Google My Maps and more. Here's a link on that:

http://www.streetmapmobile.com/20091203/google-maps-for-mobile-layers-2

We live in "interesting times", and it seems like Google is behind a lot of it :-)

Saturday, November 28, 2009

Economic Thoughts

Probably due to listening to NPR again I have been having economic thoughts about how odd it is that people's willingness to expose themselves to debt affects the bottom line of many businesses, which in turn affects the disposable income of people who might be employed by those businesses, which links back to consumer spending power.

Seems like a steady state might not be possible, i.e. just endless cycles of boom and bust. Still I wonder if there is some way to quantify the value produced by working for a lifetime, or for just one hour, that would allow us to calculate the extent to which an individual or a society is overexposing themselves in terms of debt? Problem is that it is highly non-static and that value is in the eye of the beholder.

However, perhaps we can think of our monthly incomes as indicative of the worth that society places on what we are doing. Money is strange. I guess it is straightforward to work out debt exposure based on monthly income, but somehow the relationship between money and value seems a little broken ... e.g. we can be doing all sorts of things that we don't immediately get paid for, but that produce longer term benefits, like raising children, or networking with colleagues. Guess I'm just wondering if there are any economic alternatives to money ...

Monday, November 2, 2009

Mobile Programming @HPU Spring 2010

I'm teaching a mobile programming course at Hawaii Pacific University (HPU) in the Spring. Will focus on iPhone and Android. Here's the link to sign up for single courses at HPU if you're not already enrolled there:

http://www.hpu.edu/index.cfm?contentID=373

I'll be teaching in HPU's mac lab so we'll cover both iPhone and Android programming. The plan is for every class member (or team) to have an app published in the iPhone app store and the Android marketplace by the end of the course.

Monday, October 19, 2009

Gravity thought experiments

For some reason (probably reading my son "The Magic School Bus Lost in the Solar System"), I started wondering about gravity. I was wondering why my sensation of gravity is just being pulled down, and not to the sides as well. I guess the mass to either side of me in the Earth is roughly equal and thus cancels itself out. Which made me wonder how something would fall if you happened to be standing on the edge of a hemisphere, or if there was a cylindrical space running through the center of the Earth. In the latter case I would have thought that something might fall down past the Earth's center, overshoot, and then fall back, oscillating until it eventually came to rest at the center. Not sure about the hemisphere example. Seems like something dropped might actually fall diagonally. Of course the natural rest state of objects large enough to have noticeable gravity is a sphere, so such toroidal or hemispherical objects would not occur without some kind of intervention. The funny thing about gravity is that all matter attracts other matter. I don't notice being sucked towards the computer because it has so little mass, but the mass of the Earth is huge and sucks me towards it. I wonder if there are objects in the universe massive enough for gravity to be noticed, but in different shapes so that the force of gravity is felt in directions other than straight down ...?
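
Looking up the standard textbook treatment afterwards (a back-of-envelope sketch assuming a uniform-density, non-rotating Earth and no air resistance, so very much an idealisation): inside the sphere only the mass closer to the centre than you pulls on you, so the restoring force grows linearly with distance from the centre and the dropped object performs simple harmonic motion; strictly it oscillates forever, and only settles at the centre if something like air resistance damps it:

\[
M(r) = M\left(\tfrac{r}{R}\right)^{3}, \qquad
F(r) = -\frac{G\,M(r)\,m}{r^{2}} = -\frac{G M m}{R^{3}}\,r, \qquad
T = 2\pi\sqrt{\frac{R^{3}}{G M}} \approx 84\ \text{minutes}
\]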

Tuesday, October 6, 2009

SmartFM Mobile Study Dictionary Upgrade

So I released an upgrade (1.0.1) of the SmartFM Android client, adjusting the media type for audio uploads (although the AMR uploads are still only audible on the phone and not the main site) and disabling the voice input when the Google speech recognition package is not available. This latter change was in response to a marketplace comment from Zom-B about the app force closing when the voice button was pressed.
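
For anyone else hitting the same force close, the availability check is along these lines (a minimal sketch, not the exact code in the released app): ask the PackageManager whether anything on the device can handle RecognizerIntent.ACTION_RECOGNIZE_SPEECH, and hide the voice button if nothing can.

import android.app.Activity;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.Button;
import java.util.List;

public class VoiceButtonHelper {
    // Hide the voice-input button when no activity on the device can service
    // a speech recognition intent (e.g. the Google voice search package is
    // missing), rather than letting the tap throw and force close the app.
    public static void configureVoiceButton(Activity activity, Button voiceButton) {
        PackageManager pm = activity.getPackageManager();
        List<ResolveInfo> handlers = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (handlers.isEmpty()) {
            voiceButton.setVisibility(View.GONE);
        }
    }
}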

Of course this makes me want to say a few things about the marketplace comments, which are potentially a great resource, but are kind of odd in that they can be signed under any name, and there is no way to reliably communicate back with the people who made the comments. For example, originally SuaveAfro made the comment that the app should support downloading other people's lists. I replied to SuaveAfro in my own comment explaining that that functionality was included. Subsequently SuaveAfro became Havoc, and so I updated my own comment accordingly, but that moved my comment away from the one it was referring to. I guess the solution is that I should update my comment to reflect whatever is my response to the most recent comment, but it seems less than optimal. Would be nice if the commenting framework could link directly to a discussion forum on that app, but I guess that's overkill. Comments are a simple solution - would be nice if they were at least linked to a user's email so I could have a fighting chance of getting responses back to users who are unlikely to look at the comments again after their first download ...

Thursday, September 17, 2009

SmartFM Mobile Study Dictionary live in Android Marketplace!

So shortly after my last blog post, I managed to get my SmartFM Mobile Study Dictionary app live in the Android Marketplace. I think I finally managed to set it up so it won't conflict with the ADC version of the same app.

In less than 24 hours the app jumped into the 100-500 downloads range, and we got some great feedback and ratings. Steve O's comment that it was "Very easy to use and definitions are very accurate. Excellent!" was music to my ears. I can't accept that praise without thanking Robert Brewer, George Lee, Viil Lid, Karhai Chu and Kim Binsted at the University of Hawaii for input on the developing interface, and huge thanks to the SmartFM team for their input on all aspects of the application and the various adjustments to the API to make everything connect up properly.

When I woke up this morning there was another comment from SuaveAfro, about wanting the app to allow the user to download other people's lists. I've commented back that other users' lists can be downloaded through the "Search Lists" function in the menu tab, although it is not an easy function to find, since you have to click "More" in the menu to get there. That is partly intentional, as the focus of the app is on items rather than lists, but I have to concede I haven't put up a help manual or anything that explains all that. I was so focused on the ADC submission that there is not much in the way of online support materials, and only a cursory help system in the app itself. I'll do my best to make better documentation available, but in the meantime I'll put as much as I can on this blog to help users of the application.

Tuesday, September 15, 2009

Video describing SmartFM Android Application

So the other week I submitted an Android application to the second Android Developers Challenge (ADC). There's over a million dollars in prize money up for grabs from Google. Last time Android was only available on emulators, but now the devices are out, so competition will be fierce.

I've put together a short video about my app, the SmartFM Mobile Study Dictionary, which shows you how I integrated the Google speech recognition component and got multimedia content creation working. I'm real excited about this application because now any user can create study content (focused on languages at the moment) on the go, uploading sounds and images. So say you learn a new word in the language you are studying, you can check what it means using the dictionary, and if it doesn't exist, add the entry there and then. If it is there, you get helpful info on the word and its usage, and you can add new example sentences and usages you've discovered, with images and audio to illustrate.

The results from the ADC won't be in till late November, and owners of android devices should be able to vote on the best apps later this month (please vote for me :-), but I'll release this to the market before then, once I've worked out a few bugs.

Please check out the video and let me know what you think:

SmartFmMobileStudyDictionary.mov

Friday, August 7, 2009

Android: displaying one dialog after another

So having got my progress bar dialog to appear I now find myself prevented from displaying a second dialog to announce the results of the long running action.

Here is the code:

   final ProgressDialog myOtherProgressDialog = new ProgressDialog(
     this);
   myOtherProgressDialog.setTitle("Please Wait ...");
   myOtherProgressDialog.setMessage("Adding item to study list ...");
   myOtherProgressDialog.setIndeterminate(true);
   myOtherProgressDialog.setCancelable(true);

   // TODO spinner not showing for some reason ...
   final AlertDialog dialog = new AlertDialog.Builder(this).create();

   final Thread add = new Thread() {
    public void run() {
     AddItemResult add_item_result = addItemToList(
       Main.default_study_list_id,
       (String) item.item_node.atts.get("id"));
     
     dialog.setTitle(add_item_result.getTitle());
     dialog.setMessage(add_item_result.getMessage());
     dialog.setButton("OK",
       new DialogInterface.OnClickListener() {
        public void onClick(DialogInterface dialog,
          int which) {
        
         return;
        }
       });
     
     myOtherProgressDialog.dismiss();
     //Looper.prepare();
     dialog.show();
    }
   };
   myOtherProgressDialog.setButton("Cancel",
     new DialogInterface.OnClickListener() {
      public void onClick(DialogInterface dialog, int which) {
       add.interrupt();
      }
     });
   OnCancelListener ocl = new OnCancelListener() {
    public void onCancel(DialogInterface arg0) {
     add.interrupt();
    }
   };
   myOtherProgressDialog.setOnCancelListener(ocl);
   closeMenu();
   myOtherProgressDialog.show();
   add.start();

The problem is that if I just try to show the second dialog I get this error:

java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()

and if I try and edit the existing progress dialog to show the results of the long running action I am told:

android.view.ViewRoot$CalledFromWrongThreadException: Only the original thread that created a view hierarchy can touch its views.

Which is a bit frustrating. Is my only option to display results in a completely new activity?
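
One way out that I want to try (a sketch only, assuming this code all lives inside an Activity, and not yet what's in the app): keep the slow work on the background thread, but marshal just the UI calls back onto the main thread with Activity.runOnUiThread(), which should avoid both errors above without needing a new activity.

final Thread add = new Thread() {
    public void run() {
        // Do the slow network call off the UI thread ...
        final AddItemResult add_item_result = addItemToList(
                Main.default_study_list_id,
                (String) item.item_node.atts.get("id"));

        // ... then hop back onto the UI thread for anything that touches
        // dialogs or views. runOnUiThread posts the Runnable to the main
        // Looper, so no Looper.prepare() is needed and the
        // CalledFromWrongThreadException goes away.
        runOnUiThread(new Runnable() {
            public void run() {
                myOtherProgressDialog.dismiss();
                dialog.setTitle(add_item_result.getTitle());
                dialog.setMessage(add_item_result.getMessage());
                dialog.setButton("OK", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface d, int which) {
                        d.dismiss();
                    }
                });
                dialog.show();
            }
        });
    }
};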

Android dialog not appearing after menu action

So I have been having reasonable success getting progress dialogs to show up in android. It all seems to rely on showing the progress dialog and then starting whatever is the long running process in another thread, e.g.

private void loadItem(Activity activity, final String item_id) {
    ProgressDialog myProgressDialog = new ProgressDialog(activity);
    myProgressDialog.setTitle("Please Wait ...");
    myProgressDialog.setMessage("Loading item ...");
    myProgressDialog.setIndeterminate(true);
    myProgressDialog.setCancelable(true);

    final ItemDownload item_download = new ItemDownload(activity,
            myProgressDialog) {
        public Vector downloadCall(SmartFmLookup lookup) {
            return lookup.item(item_id);
        }
    };
    myProgressDialog.setButton("Cancel",
            new DialogInterface.OnClickListener() {
                public void onClick(DialogInterface dialog, int which) {
                    item_download.interrupt();
                }
            });
    OnCancelListener ocl = new OnCancelListener() {
        public void onCancel(DialogInterface arg0) {
            item_download.interrupt();
        }
    };
    myProgressDialog.setOnCancelListener(ocl);
    myProgressDialog.show();
    item_download.start();
}


I've even managed to support user cancellation. All good; however, I have a more complex situation where the progress dialog is supposed to be displayed after hitting a menu button, then replaced with another dialog that shows the results of the process, which is followed by a second progress dialog. In this case the first progress dialog never appears, but the result dialog and second progress dialog do. I've tried removing all the other dialogs, but the initial progress dialog still never shows up - it gets stuck on the clicked menu button (see image).

Here's the code with subsequent dialogs stripped out:


public boolean onOptionsItemSelected(MenuItem menu_item) {
    switch (menu_item.getItemId()) {
    case ADD_TO_LIST_ID: {
        // send command to add to list - need spinner?

        final ProgressDialog myOtherProgressDialog = new ProgressDialog(
                this);
        myOtherProgressDialog.setTitle("Please Wait ...");
        myOtherProgressDialog.setMessage("Adding item to study list ...");
        myOtherProgressDialog.setIndeterminate(true);
        myOtherProgressDialog.setCancelable(true);

        // TODO spinner not showing for some reason ...

        final Thread add = new Thread() {
            public void run() {
                AddItemResult add_item_result = addItemToList(
                        Main.default_study_list_id,
                        (String) item.item_node.atts.get("id"));
            }
        };
        myOtherProgressDialog.setButton("Cancel",
                new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        add.interrupt();
                    }
                });
        OnCancelListener ocl = new OnCancelListener() {
            public void onCancel(DialogInterface arg0) {
                add.interrupt();
            }
        };
        myOtherProgressDialog.setOnCancelListener(ocl);
        myOtherProgressDialog.show();
        add.run();

        break;
    }
    }
    return super.onOptionsItemSelected(menu_item);
}


I think the solution must require programmatic closing of the menu popup. I did find this post on how to open it. I was hoping to use that to open the menu bar at the beginning of my app, but calling that from the onCreate method causes this error:

android.view.WindowManager$BadTokenException: Unable to add window -- token null is not valid; is your activity running?

However it seems I can use something similar to close the menu panel:


    public void closeMenu(){
        this.getWindow().closePanel(Window.FEATURE_OPTIONS_PANEL);
    }

However, even after shutting the open menu panel, I still don't get to see the progress dialog.

Ah, found the problem: I was calling add.run() instead of add.start(), so the new Thread was never spawned. Duh!!!
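
In case anyone else trips over the same thing, the difference is easy to miss:

add.run();   // runs the run() body synchronously on the current (UI) thread,
             // blocking it, so the progress dialog never gets drawn
add.start(); // actually spawns a new thread and returns immediately,
             // letting the UI thread show the dialog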


Tuesday, July 21, 2009

Android audio formats - converting to mp3

So Android supports recording of audio in the 3GPP/AMR format; however, it seems that there is no built-in support for conversion to mp3, which is what I need to be able to upload audio to a web service I am working with.

Apparently we can extract the AMR core from a 3GPP file but it seems I need some audio conversion library ...
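
One thing worth noting in passing (a sketch only, and it doesn't solve the mp3 problem): MediaRecorder can be told to skip the 3GPP container and write raw AMR directly, which at least removes the extraction step. This assumes API level 3 (Cupcake) or later, where OutputFormat.RAW_AMR is available.

import android.media.MediaRecorder;
import java.io.IOException;

public class AmrRecorder {
    // Record straight to a raw .amr file (no 3GPP container), so there is
    // nothing to extract before uploading the audio.
    public static MediaRecorder startRecording(String outputPath) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
        return recorder; // the caller should stop() and release() it when done
    }
}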

One way round this might be to upload the file to another service that could do the conversion. There are various online conversion services, but I can't seem to find one that will do 3gpp to mp3.

I had also thought that there might be something in the Android Marketplace (and then I could outsource the conversion but still run it on the device), and there is one app, but it has a lot of reviews suggesting it does not work and may even be a virus.

There appear to be some people who have succeeded in porting ffmpeg to Android:

http://discuz-android.blogspot.com/2008/12/struct-ipv6mreq-has-no-member-named.html
http://gitorious.org/~olvaffe/ffmpeg/ffmpeg-android

However, so far I have failed to even get ffmpeg to work from the command line with files extracted from the Android emulator:

http://www.hiteshagrawal.com/ffmpeg/converting-audiovideos-using-ffmpeg

ffmpeg is open source, but I am not sure how to deploy code other than Java to Android.

So I guess I am going to give up on uploading audio from android and see if I can get images uploaded, and come back to this after I have some more input ..

Thursday, July 2, 2009

Sen et al (2007) Learning to Identify Beneficial Partners

Cited by 4 [ATGSATOP]

So this is another paper in my attempt to finish the background reading for an invited paper in the AP2PC'07 workshop proceedings.  I believe I found this one following a citation trail from Ben-Ami and Shehory (2007) and I think I grabbed it because it had "learning" in the title.  Peer to Peer is mentioned in passing, but this paper is really about a multi-agent system where individual agents have learning capabilities.  I know the first author from a panel session in AP2PC'05, so that is another connection, but I can't really remember if I had some more complex motivation for printing out this particular paper last October.

In principle I am reading this to help illuminate some of the ways that agent research can be of benefit to P2P researchers, but there is a part of me that is just interested in mathematical and algorithmic characterizations of "learning".  The paper itself introduces parallels between human and artificial agents trying to make critical choices about interaction partners; and this makes me think of the human interaction analogies in Ian Clarke's Master's thesis on Freenet, my own intuitions about pruning search in my NeuroGrid system, as well as the agent modelling in the paper I co-authored with Ben Tse and Raman Paranjape.  We are all humans and we interact with other humans most days, and so I guess it is no surprise that this sort of analogy crops up again and again; however I think there is a pitfall here.  Sometimes the analogies break down and our intuitions lead us astray - I think this is the case with mobile agents, where our human experience of the greater efficiency of face to face interaction suggests that sending a mobile agent across a network should be more efficient than static agents communicating with each other, when in fact it is difficult to predict the relative efficiency of the two methods (Joseph & Kawamura, 2001).

The goal of the authors' research is to try to discover which learning schemes will sustain mutually beneficial partnerships between agents.  Apparently algorithms which achieve equilibria in repeated play have so far been restricted to two-player situations.  This paper examines a population of agents that learn through the reinforcement technique of Q-learning (Watkins & Dayan, 1992).  The authors restrict their system to one where agents search through repetitive personal interaction, not through referral.

In the authors' system each agent is of a particular type, and has preferences to interact with agents of other types.  Thus the potential reward that agents achieve through interacting with each other is a matrix of agents against types; and the matrix is designed such that some optimal arrangement of agent partnerships exists where no agent can get a greater reward by switching to interact with other agents.  Since the matrix of rewards is unknown to the agents, the Q-learning technique is used to update agents' estimates of the rewards of interacting with each other.  Q-learning updates estimates through a combination of earlier experiences of reward with the current experience.  The extent to which experience influences current estimates is not varied, and the alpha parameter that determines this is not mentioned again in the paper, making a replication difficult.  However, in order to vary the agents' level of explorative behaviour, i.e. the extent to which agents try out new interactions, the authors adjust the probability with which the agents select a random agent to interact with, rather than the one recommended by the Q-learning estimate.  By adjusting this probability over time, exploratory behaviour is gradually reduced in what seems like a sort of simulated annealing.
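
To check my own understanding, here is a rough sketch of how I imagine the per-interaction update working (my own reconstruction, not the authors' code; the learning rate alpha and the exploration schedule are placeholders, since the paper does not report them):

import java.util.Random;

public class PartnerLearner {
    private final double[] q;      // estimated reward of interacting with each other agent
    private final double alpha;    // learning rate (not reported in the paper)
    private double epsilon;        // probability of exploring a random partner
    private final double decay;    // per-step decay of the exploration probability
    private final Random rng = new Random();

    public PartnerLearner(int numAgents, double alpha, double epsilon, double decay) {
        this.q = new double[numAgents];
        this.alpha = alpha;
        this.epsilon = epsilon;
        this.decay = decay;
    }

    // Epsilon-greedy partner choice: mostly pick the partner with the highest
    // Q estimate, occasionally explore a random one.
    public int choosePartner() {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(q.length);
        }
        int best = 0;
        for (int i = 1; i < q.length; i++) {
            if (q[i] > q[best]) {
                best = i;
            }
        }
        return best;
    }

    // Q-learning style update: blend the old estimate with the reward just
    // observed, then shrink the exploration probability (the slow annealing
    // of exploratory behaviour discussed above).
    public void observeReward(int partner, double reward) {
        q[partner] = (1 - alpha) * q[partner] + alpha * reward;
        epsilon *= decay;
    }
}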

No particular justification is given for the particular parameter settings or the approach used, leading me to wonder what the basis for this approach is.  Are these techniques similar to others in the literature, or are they based on any empirical observations of real-world phenomena?  Nonetheless, initial simulation results in a static environment show that a slow decay of exploratory behaviour is associated with the system taking longer to achieve equilibrium, but also with a higher final average payoff for the agents.  This certainly makes intuitive sense.

In subsequent simulations, dynamic environments are explored where agents die off and are replaced if they fail to achieve a sufficiently high payoff within a certain timeframe.  As the environment becomes tougher and agents are killed off more quickly, it takes longer and longer for the system to reach a stable equilibrium, although this can be mitigated by reducing the level of exploratory behaviour (see figure; d is the rate at which exploratory behaviour decays).  Again this makes intuitive sense.

In further simulations we see that protecting young agents can also help the system achieve equilibrium sooner, which also makes intuitive sense, and makes me think of the use of karma in online communities; or at least the way that new users will be given an initial chunk of reward or karma points.  Not sure how strong the parallel is here, but I guess you could model an online community in terms of multi-agents looking for beneficial interactions.  New users are entering the community at a certain rate, and not hanging around indefinitely.  They will need to have positive interactions within a certain time period before they will effectively remove themselves from the community; which makes me think of that paper that shows the effect of existing social network patterns on incoming users (wasn't it something to do with closed triangular relations) - should re-read that for my thesis project, if I can find it (probably in disCourse somewhere).

In a final section the authors experiment with introducing noise, and we see that noise can have a similar effect to prolonging exploratory behaviour, i.e. taking longer to get to equilibrium, but perhaps finding a higher optimum.  My main concern with all this is the relationship to the real world, where systems may spend much of their time away from equilibrium.  I see connections to other work that I have done on the evolution of intelligence (where we compared our models with populations of animals in the real world) and online communities, but a lot of the modeling decisions seem to be somewhat arbitrary.  It would be nice to know what was motivating them.

My references:

Joseph S. & Kawamura T. (2001) Why Autonomy Makes the Agent. In Agent Engineering, Eds. Liu, J., Zhong, N., Tang, Y.Y. and Wang, P. World Scientific Publishing.

Sandip Sen, Anil Gursel, & Stephane Airiau (2007). Learning to identify beneficial partners. Working Notes of the Adaptive and Learning Agents Workshop at AAMAS.

Wednesday, July 1, 2009

Ben-Ami & Shehory (2005) A Comparative Evaluation of Agent Location Mechanisms in Large Scale MAS

Cited by 3 [ATGSATOP]

So this was one of the papers I printed out last year since it cited Koubarakis (2003), and I am reading it as part of trying to put together an invited paper for the AP2PC 2007 workshop proceedings. Putting together the proceedings and the invited paper has been a somewhat ill-fated process, interrupted by the death of my father, the collapse of the industry grant that was funding my position at the University of Hawaii, and the birth of my twin sons! I am back on track now, and although new crises loom on the horizon I am actually making some progress reading all the papers I printed out last October (Sycara, 1998; Sycara, 1991; Rosenschein, 1993; Koubarakis, 2003). If I can just hang in there I can wrap this thing up by the end of the summer - fingers crossed.

So after reading this paper I think I probably should have been reading Shehory (2000), which describes the distributed agent location mechanism that this paper evaluates in comparison with centralized location mechanisms. However, this is the one I had printed out, so partly not to waste the paper, and also because it is easier to read paper papers when pushing the twin stroller around, I am sticking with this. It also reflects my earlier literature-searching tendency to print out things that look interesting without necessarily doing sufficient investigation to find the critical highly-cited papers in a particular domain; but enough of that.

This paper was presented at AAMAS 2005, which I attended. It describes a number of problems associated with centralized location services, e.g. that "the middle agent that supplies the directory services becomes one of the system failure points and/or communication bottlenecks", although no academic or industry systems are cited. I get the feeling that the problems described are based on observations of toy systems and simulations rather than on experience with really large scale systems. I may be wrong, but the paper is not really providing me reassurance to the contrary. The paper mentions the P2P approach, but only the flooding model is considered rather than anything more sophisticated like a distributed hashtable (DHT), although this is understandable given the year of publication. Compared to simulations of P2P systems, the testing of simple random and grid networks seems a little overly simplistic. Furthermore, the efforts at generalizability are restricted to a fixed number of repeat runs rather than an assessment of the number of runs needed to achieve a particular confidence level. Of course the same criticism could be leveled at most P2P simulation studies.

The main conclusions of the paper are as follows:
  1. the response time of a distributed location mechanism is significantly better than the response time of a centralized one, in particular for large scale MAS (see fig 1 above). This result does not hold, however, in capability-deprived MAS, where a centralized mechanism will perform better.
  2. it is evident that a centralized location mechanism is very sensitive to workloads. At a medium to high load, in particular in large MAS, the centralized mechanism will perform poorly, whereas the distributed one will hardly be affected.
  3. the advantages of the distributed solution do come at the cost of a communication overhead
It feels, though, like these conclusions follow logically from the definitions of distributed and centralized location systems, so it is not clear that we necessarily need simulations to confirm these assertions. So overall I don't come away thinking that P2P researchers would find much of interest in this paper, except in so far as to see the parallel concepts in the two fields, but I have to concede that this may be the right paper for some agents researchers to read in order to see the applicability of the P2P approach. So let us say that I was slightly disappointed by the level of this paper, but it would be unfair to judge the potential contribution of agents to P2P based on this one paper, and I think my overall opinion is turning. It might be helped by reading Shehory (2000), but right now I am going to forge on with the other papers I have printed out.

David Ben-Ami & Onn Shehory (2005). A comparative evaluation of agent location mechanisms in large scale MAS. Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, 339-346. DOI: 10.1145/1082473.1082525

References
Barabasi, A. L., Albert, R., "Emergence of Scaling in Random Networks" (Cited by 6145). Science, 286, 509-512, 1999.
David Ben-Ami, Onn Shehory, "Evaluation of Distributed and Centralized Agent Location Mechanisms", Proceedings of the 6th International Workshop on Cooperative Information Agents VI, p.264-278, September 18-20, 2002
Clarke I., Sandberg O., Wiley B., Hong T. W., "Freenet: A Distributed Anonymous Information Storage and Retrieval System" (Cited by 1760). Proceedings of the ICSI Workshop on Design Issues in Anonymity and Unobservability. Berkeley, CA, 2000.
Decker K., Sycara K., Williamson M. "Middle-Agents for the Internet" (Cited by 398) .Proceedings of IJCAI-97, pages 578--583, Nagoya Japan 1997.
Dimakopoulos V. V., Pitoura E., "A Peer-to-Peer Approach to Resource Discovery in Multi-agent Systems" (Cited by 13). Proceedings of. CIA 2003: pages 62--77.
Michael R. Genesereth, Steven P. Ketchpel, Software agents, Communications of the ACM, v.37 n.7, p.48-ff., July 1994
Gibbins, N. and Hall, W. "Scalability Issues for Query Routing Service Discovery" (Cited by 19). Proceedings of the Second Workshop on Infrastructure for Agents, MAS and Scalable MAS (2001), pages 209--217.
Adriana Iamnitchi, Ian Foster, Daniel C. Nurmi, "A Peer-to-Peer Approach to Resource Location in Grid Environments", Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing HPDC-11 (HPDC'02), p.419, July 24-26, 2002
Somesh Jha, Prasad Chalasani, Onn Shehory, Katia Sycara, "A formal treatment of distributed matchmaking" (poster), Proceedings of the second international conference on Autonomous agents, p.457-458, May 10-13, 1998, Minneapolis, Minnesota, United States
Koubarakis M., "Multi-agent Systems and Peer-to-Peer Computing: Methods, Systems, and Challenges" (Cited by 12). Proceedings of. CIA 2003 pages 46--61.
Kuokka D., Harada L., "Matchmaking for information agents" (Cited by 100). Proceedings of IJCAI-95, pages 672--679, 1995.
Elth Ogston, Stamatis Vassiliadis, "Matchmaking among minimal agents without a facilitator", Proceedings of the fifth international conference on Autonomous agents, p.608-615, May 2001, Montreal, Quebec, Canada
Onn Shehory, A Scalable Agent Location Mechanism, 6th International Workshop on Intelligent Agents VI, Agent Theories, Architectures, and Languages (ATAL), p.162-172, July 15-17, 1999
Smithson A., Moreau L., "Engineering an Agent-Based Peer-To-Peer Resource Discovery System" (Cited by 3). In Gianluca Moro and Manolis Koubarakis, editors, First International Workshop on Agents and Peer-to-Peer Computing, pages 69--80, Bologna, Italy, July 2002.
Srinivasan N. et al., "Enabling Peer-to-Peer Resource Discovery in Agent Environment" (Cited by 1). Proceedings of Challenges in Open Agent Systems (AAMAS 2002), July 2002.
Stoica I., Morris R., Karger D., Kaashoek M. F., Balakrishnan H., "Chord: A scalable peer-to-peer lookup service for Internet Applications" (Cited by 6651). Technical Report TR-819, MIT, March 2001.
Vitaglione G., Quarta F. and Cortese E., "Scalability and Performance of JADE Message Transport System" (Cited by 38). Proceedings of the AAMAS Workshop on AgentCities, Bologna, 2002.
Watts, D. J., Strogatz, S. H, "Collective Dynamics of 'Small World' Networks" (Cited by 7766). Nature, 393: pages 440--442, 1998.
A Taxonomy of Middle-Agents for the Internet, Proceedings of the Fourth International Conference on MultiAgent Systems (ICMAS-2000), p.465, July 10-12, 2000
Yolum P., Singh M. P., "An Agent-Based Approach for Trustworthy Service Location" (Cited by 13). Proceedings of the 1st International Workshop on Agents and Peer-to-Peer Computing, Bologna, Italy 2002.
Bin Yu, Munindar P. Singh, "A Social Mechanism of Reputation Management in Electronic Communities", Proceedings of the 4th International Workshop on Cooperative Information Agents IV, The Future of Information Agents in Cyberspace, p.154-165, July 07-09, 2000

Friday, June 26, 2009

Kim (2008) The Role of Task-Induced Involvement and Learner Proficiency in L2 Vocabulary Acquisition

Cited by 1 [ATGSATOP]

Another paper that I am reading as part of a meta-analysis of second language vocabulary learning. I had started to read this and then paused for three weeks while I read three background theoretical papers (Laufer & Hulstijn, 2001; Hulstijn, 2001; Hulstijn, 2003) that made this one much easier to understand.

This paper is an experimental study in two parts designed to test L&H's involvement load hypothesis. One concern is control of time on task, since this varied in L&H's experimental attempt to assess the involvement load hypothesis. Knight (1994) apparently brings this issue up in general for things like dictionary look-up tasks. All through I was concerned with precisely how vocabulary knowledge was being measured. Like Folse (2006), Kim used the Vocabulary Knowledge Scale (VKS; Paribakht & Wesche, 1993), but I still wonder what L&H used - later on it is described as providing L1 translation or English explanations. Laufer's (2003) experiment gave support for different performance based on different levels of involvement load; however, another experiment in the set gave varying performance for three tasks that were supposed to have the same involvement load (distribution was different?). Am keen to know Laufer's explanation of that - is that paper also on our reading list?

Laufer (2001) apparently indicates that the involvement load construct should generalise from textual to face to face audio situations, which I had assumed, but it is good to be able to reference that assertion given the wide range of studies we are applying the concept to.  I was unsure of the meaning of interactionally modified input versus interactionally modified output, and in particular the concept of premodified input, although this is in the context of L&H (2001), which I guess I should be reading.

I was concerned about the random assignment implications of the split between the two experiments. One of the experimental groups from the first experiment is compared with a group constructed for the second experiment, which I think was run subsequently, and although similar had a slightly different mix of ages and nationalities.

Another concern is that it seems we could explain the results independently of involvement load. In the reading condition the learners' attention is only drawn to the target words through emphasis and glossing. In the gap-fill condition the learners' attention is drawn to 15 words, and in the composition and sentence-writing conditions the learners' attention is drawn to the 10 words they will be tested on. Purely in terms of attention one might expect to see the results that were achieved. In the experiment that tested the three different involvement load levels, the immediate post-test only distinguished the composition group as significantly higher, while the delayed post-test distinguished all three - there was no interaction or main effect for proficiency level. The second experiment made no distinction between the composition and sentence-writing tasks. I had been wondering earlier if the results could all be explained in terms of receptive/productive or active/passive differences, although the significant difference between reading and gap-fill at post-test could not; but now that I realise there were 15 words being brought to attention in the gap-fill task, it seems that the results can all be explained in terms of attentional resources. Another question is whether the comprehension questions needed understanding of the target words in order to be answered (looking at appendix B I would say not really).

I am concerned about the bias of using the VKS tests, and the author expresses some concerns as well. I find the alleged pedagogical implications sit uneasily with me, since I am not sure that showing a benefit on a VKS test necessarily indicates that the learner has gained something of importance.  The key problem here is that the VKS sentence generation task could represent various sorts of ability on the part of the learner, e.g. that they memorized a sentence containing the word versus actually generating a novel sentence.  In particular it seems that if a learner was specifically practicing sentence generation or doing essay composition for a particular set of vocabulary that this would increase performance on the test through a practice effect.  It seems to be obvious that practicing a productive skill would lead to higher performance on productive tests, whereas practicing a receptive skill would lead to benefits on receptive tests.  The question I would like to know the answer to is what kind of transfer do we get cross-task, and thus motivational concerns aside, what is the most efficient approach to take to maximise ability on both receptive and productive tasks.

Reading the proofs of our soon to be published paper on vocabulary study (Joseph et al. 2009), I am struck that as we discuss how to make tests more and more challenging, we are not addressing the goal of the language learner. We are arguing that gradually more challenging tasks maintain motivation and boost long term retention, but the real question should be what is the long term task that the learner wants to succeed at. Clearly looking up a word in a dictionary can help a learner understand a sentence they are reading. The question is then whether other activity related to that word should be undertaken. The usual argument in L2 is that if nothing else is done then exposure to low frequency words will be insufficient for the learner to avoid having to look the word up again in future. I guess the real question is whether some sort of "artificial" re-exposure to the word will be a more efficient way of increasing the likelihood of future sentence comprehension, versus using that same time to just do more reading ... and what kind of experiment could actually test which approach was more efficient? I guess one could have learners perform a reading comprehension task, and then have one group perform another reading comprehension task, while a second group did vocabulary review, and then both groups would be tested on another reading comprehension task that was of comparable level and contained similar words. So for this kind of experiment we would need three different texts of comparable length, involving the same "target" vocabulary?

Depending on the results of such an experiment an argument could be made to say that although explicit vocabulary study was not recommended, that selection of subsequent texts for additional comprehension practice could be selected based on which words were looked up by a learner, in order to increase the chances of a rewarding experience - which is linked to overall motivation issue, i.e. should the learner be reading anything other than texts they specifically select themselves?

[A great deal of research has shown that when learners study definitions alone their ability to comprehend text containing the target words does not improve (Graves, 1986; Stahl & Fairbanks, 1986)] from Joseph et al. (2009), so I wonder if doing essay composition, or gap filling leads to improvements in text comprehension.

[N.B. The Kim paper also references some more studies showing the importance of negotiation that I was previously associating with Newton (1995), i.e. de la Fuente (2002) and Joe (1995, 1998), although the latter focused on generative rather than negotiated tasks?]

Kim, Y. (2008). The Role of Task-Induced Involvement and Learner Proficiency in L2 Vocabulary Acquisition. Language Learning, 58 (2), 285-325. DOI: 10.1111/j.1467-9922.2008.00442.x

My References

Joseph S.R.H., Watanabe Y., Shiung Y.-J., Choi B. & Robbins C. (2009) Key Aspects of Computer Assisted Vocabulary Learning (CAVL): Combined Effects of Media, Sequencing and Task Type. Research and Practice in Technology Enhanced Learning. 4(2) 1-36. 

Kim's References
Arlov, P. (2000). Wordsmith: A guide to college writing (Cited by 3). Upper Saddler River, NJ: Prentice Hall.
Barcroft, J. (2002). Semantic and structural elaboration in L2 lexical acquisition (Cited by 34). Language Learning, 52(2), 323–363.
Baddeley, A. D. (1978). The trouble with levels: A reexamination of Craik and Lockhart's framework for memory research (Cited by 190). Psychological Review, 85, 139–152.
Brown, T. S., & Perry, F. L., Jr. (1991). A comparison of three learning strategies for ESL vocabulary acquisition, TESOL Quarterly, 25, 655–671.
Cho, K-S., & Krashen, S. (1994). Acquisition of vocabulary from the Sweet Valley Kids Series: Adult ESL acquisition. Journal of Reading, 37, 662–667.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research (Cited by 3428). Journal of Verbal Learning and Verbal Behavior, 11, 671–684.
Craik, F. I. M., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory (Cited by 1346). Journal of Experimental Psychology: General, 104, 268–294.
de la Fuente, M. J. (2002). Negotiation and oral acquisition of L2 vocabulary: The roles of input and output in the receptive and productive acquisition of words. Studies in Second Language Acquisition, 24, 81–112.
Ellis, N. C. (2001). Memory for language (Cited by 97). In P. Robinson (Ed.), Cognition and second language instruction (pp. 33–68). Cambridge: Cambridge University Press.
Ellis, R., & He, X. (1999). The role of modified input and output in the incidental acquisition of word meaning (Cited by 0). Studies in Second Language Acquisition, 21, 285–301.
Ellis, R., Tanaka, Y., & Yamazaki, A. (1994). Classroom interaction, comprehension, and L2 vocabulary acquisition (Cited by 19). Language Learning, 44, 449–491.
Howell, D. C. (2002). Statistical methods for psychology (Cited by 3067) (5th ed.). Pacific Grove, CA: Duxbury.
Hulstijn, J. H., Hollander, M., & Greidanus, T. (1996). Incidental vocabulary learning by advanced foreign language students: The influence of marginal glosses, dictionary use, and reoccurrence of unknown words (Cited by 185). The Modern Language Journal, 80, 327–339.
Hulstijn, J. H., & Laufer, B. (2001). Some empirical evidence for the involvement load hypothesis in vocabulary acquisition (Cited by 91). Language Learning, 51, 539–558.
Joe, A. (1995). Text-based tasks and incidental vocabulary learning (Cited by 44). Second Language Research, 11, 149–158.
Joe, A. (1998). What effects do text-based tasks promoting generation have on incidental vocabulary acquisition (Cited by 62)? Applied Linguistics, 19, 357–377.
Knight, S. M. (1994). Dictionary use while reading: The effects on comprehension and vocabulary acquisition for students of different verbal abilities (Cited by 150). Modern Language Journal, 78, 285–299.
Laufer, B. (2000). Electronic dictionaries and incidental vocabulary acquisition: Does technology make a difference (Cited by 20)? In U. Heid, S. Evert, E. Lehmann, & C. Rohrer (Eds.), EURALEX (pp. 849–854). Stuttgart: Stuttgart University Press.
Laufer, B. (2001). Reading, word-focused activities and incidental vocabulary acquisition in a second language (Cited by 15). Prospect, 16(3), 44–54.
Laufer, B. (2003). Vocabulary acquisition in a second language: Do learners really acquire most vocabulary by reading (Cited by 44)? Some empirical evidence. Canadian Modern Language Review, 59, 567–587.
Laufer, B., & Hulstijn, J. H. (2001). Incidental vocabulary acquisition in a second language: The construct of task-induced involvement (Cited by 150). Applied Linguistics, 22, 1–26.
Luppescu, S., & Day, R. R. (1993). Reading, dictionaries and vocabulary learning (Cited by 99). Language Learning, 43, 263–287.
Nassaji, H. (2002). Schema theory and knowledge-based processes in second language reading comprehension: A need for alternative perspectives (Cited by 46). Language Learning, 52(2), 439–482.
Nation, P. (2001). Learning vocabulary in another language (Cited by 807). Cambridge: Cambridge University Press.
Newton, J. (1995). Task-based interaction and incidental vocabulary learning: A case study (Cited by 39). Second Language Research, 11, 159–177.
Paribakht, T. S., & Wesche, M. (1993). The relationship between reading comprehension and second language development in a comprehension-based ESL program (Cited by 84). TESL Canada Journal, 11, 9–29.
Paribakht, T. S., & Wesche, M. (1997). Vocabulary enhancement activities and reading for meaning in second language vocabulary acquisition (Cited by 136). In J. Coady & T. Huckin (Eds.), Second language vocabulary acquisition: A rationale for pedagogy (pp.174–200). Cambridge: Cambridge University Press.
Pulido, D. (2003). Modeling the role of second language proficiency and topic familiarity in second language incidental vocabulary acquisition through reading (Cited by 38). Language Learning, 53(2), 233–284.
Read, J. (2000). Assessing vocabulary. Cambridge: Cambridge University Press.
Rott, S. (2004). A comparison of output interventions and un-enhanced reading conditions on vocabulary acquisition and text comprehension (Cited by 1). The Canadian Modern Language Review, 61(2), 169–202.
Rott, S., Williams, J., & Cameron, R. (2002). The effect of multiple-choice L1 glosses and input-output cycles on lexical acquisition and retention (Cited by 20). Language Teaching Research, 6, 183–222.
Stahl, S. A., & Clark, C. H. (1987). The effects of participatory expectations in classroom discussion on the learning of science vocabulary (Cited by 20). American Educational Research Journal, 24(1), 541–555.
Waring, R., & Takaki, M. (2003). At what rate do learners learn and retain new vocabulary from reading a graded reader (Cited by 46)? Reading in a Foreign Language, 15(2), 130–163.
Wesche, M., & Paribakht, T. S. (1996). Assessing second language vocabulary knowledge: Depth versus breadth (Cited by 7). Canadian Modern Language Review, 53, 13–39.

Thursday, June 25, 2009

Fighting with Android Layouts - lack of row spanning

So I've been trying to layout a couple of icons, two pieces of text and an image in an Android GUI, and have so far been thwarted from achieving the effect that I desire.

I've tried various arrangements and I still can't get what I want. My initial layout was just to use a simple table layout like the one on the left; with android:shrinkColumns="1" the sentences get wrapped, which looks good.

Results from the emulator are in the image on the left (the translation sound icon is hidden when not available). My only complaint is that the image doesn't span two rows, leaving a lot of blank space between sentences. Unfortunately it seems that Android tables don't support row spanning, according to this Google groups post.

Next I tried a table within a table layout as shown on the left, but try as I might with different combinations of stretch and shrink columns, either the image would get knocked onto the next row (as shown), or when it wasn't, the bottom of the text would be cut off.

After that I tried experimenting with RelativeLayouts, where you can specify that one item should be positioned relative to another using syntax like android:layout_toRightOf = "@id/sentence_sound".

My first attempt, shown further down on the left, used a single RelativeLayout inside a table (tables seeming to be the only way to get the text to wrap), and that was probably the best result, given that the translation sound was absent.

You can see from the black image (taken from the Eclipse GUI) the way the translation sound doesn't line up with the translation. Given the translation sound is usually absent in the content I am dealing with, I will go with that for the moment.

The reason for giving up there is that what I thought was my clever final approach - to nest relative layouts, so we had one for the sentence and sound, and a second for the translation and sound, and place one above the other - leads to the complete disappearance of the translation, even though the hierarchy viewer claims that it's there.

I guess what I am attempting to do is a bit of a corner case, but it has been frustrating not to get the layout just as I wanted - almost as frustrating as trying to get all these images lined up in this blog post :-)

Even with these images I would be surprised if anyone can really understand what I'm doing. For replication, I really need an easy way to associate the XML layout files with each image. I'll happily provide those if anyone thinks they know a solution.

Monday, June 22, 2009

Koubarakis (2003) Multi-Agent Systems and Peer to Peer Computing: Methods, Systems and Challenges

So I must have read this paper about 3 or 4 times now.  It was originally recommended to me by Gianluca Moro as being a good paper to read about what MultiAgent Systems (MAS) research might have to offer peer to peer (P2P) research.  In Koubarakis' opinion the Agents field was slower to pick up tools and techniques from the P2P community than some other fields and that this was disappointing given that:
deployed P2P systems can be considered an interesting case of MAS as pointed out originally [by Finin & Labrou (2000)].
Koubarakis further suggests specifically that:
MAS can readily offer concepts and techniques that can be useful to P2P computing at the application modeling and design level (e.g., ontologies for describing network resources in a semantically meaningful way, protocols for meaning negotiation, P2P system modeling and design methodologies etc.).
Interestingly I was at an Agents conference in 2000 where Tim Finin spoke at the scalability workshop as to how Napster might be "improved" using ontologies, and I remember comments afterwards to the effect that Napster was doing just fine without ontologies.  What I keep coming back to here is the question of whether ontologies and sophisticated protocols for meaning negotiation would provide any short-term benefit for P2P system users and developers.  I have several layers of comments about this in the paper from my various passes over this point, but the summary is that a sophisticated layer of middle agents on which to base applications could allow developers to avoid re-implementing lower layers again and again, but that any individual developer is not going to have much patience with that kind of system if they don't see some immediate benefit.  In addition in the first instance there are likely to be drawbacks in as much as P2P applications need to operate blindingly fast to produce the best results for the end user, and as such P2P protocols are simple and robust.  Support for complex negotiations seems like it would slow things down.

Nevertheless, Koubarakis presents a review of what P2P research might offer MAS research and vice versa.  The former makes sense to me; P2P systems can offer look up services that agent systems use, and basically be an infrastructure component for MAS.  It is in the latter case that I still struggle with understanding the benefits, i.e. what agent research can offer to P2P developers.  I guess it is important to distinguish between P2P researchers and P2P developers, but I will attempt that elsewhere.  Koubarakis' first example is of the application of agent based software engineering methodologies to P2P systems.  I think this is one that I glossed over in earlier readings, and now that I look up the referenced paper by Bertollini et al. (2002) I see that the Tropos software engineering methodology (from a very quick skim) allows a diagrammatic summary of the Napster and Gnutella architectural designs, which they then abstract away from to produce a generic peer to peer virtual community pattern which can then be used to support the implementation of particular P2P solutions using the JXTA framework (JXTA forums are still active, but not clear what the status of that project is, particularly since Oracle bought Sun).  Interestingly the example implementation used comes from health care, which is an application domain I compared Agents and P2P in myself (Tse et al., 2006).  I am not deeply familiar with the effectiveness of agent-oriented software design, but this does seem like an area where agent theory might have something to offer.  At least, some sort of formal approach could be helpful in the design of distributed systems.

Next up is the idea from Tim Finin: build ontologies on top of P2P systems.  The Edutella project is given as an example, although that project appears to have petered out; at least the Edutella website has not been updated since 2004, although there are more recent academic publications on Edutella.  Of course even if the Edutella project has not been a great success that doesn't mean there can't be some value to be derived from building ontologies on top of P2P systems, but I struggle to see what it would be.  Of course this relates to the whole question of the "Semantic Web", on which I found a recent Tim O'Reilly post.  O'Reilly is talking about rich data snippets that allow Google results to display more structure.  There is a whole bundle of ideas here, but I should try and finish my summary of Koubarakis' paper before straying into that territory.

Koubarakis specifically mentions the Semantic Web while referring to work by Karl Aberer on local ontologies and local translations among the ontologies of neighboring peers.  Aberer's paper "The Chatty Web: Emergent Semantics Through Gossiping" is cited by 156 [ATGSATOP], and there certainly seems to be a rich research vein there.  I see some interesting articles looking at emergent semantics deriving from folksonomies - another area I have published in (Joseph et al., 2009).  A skim of Aberer's article indicates that the problem they are hoping to address is inter-ontology mapping so that, for example, one could send out a query to get project titles from multiple different data sources, where the meta-data format was potentially different in each case, e.g. multiple XML documents where in one case we have <project><title>My Project</title></project> and in another we have <project-title>Project X</project-title>.  Without reading that paper in more detail it is not clear to me to what extent schema/ontology authors have to provide mappings, and to what extent users are giving feedback on failed matches; but the authors reference other work on automated ontology matching, which I guess is what this is all about.  Say I want to formulate my query for a flight and send it out to all the online travel sites: I don't have to force them all to use the same schema - there is some process that just handles the translation between the different terms used by each site so that everyone can agree on how to query on things like "departure time".  Still it seems like the simple short term solution is to have translations provided by 3rd parties, if at all.  It is not clear to me why the effort of automating ontology matching brings great bounty.
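
Just to make the mapping idea concrete for myself (a toy sketch of my own, nothing from Aberer's paper), the simplest form of "local translation" is really just a per-neighbour lookup table that rewrites my field names into the neighbour's element names before a query is forwarded:

import java.util.HashMap;
import java.util.Map;

public class PeerSchemaTranslator {
    // One map per neighbouring peer: my field name -> their element name.
    private final Map<String, Map<String, String>> translations =
            new HashMap<String, Map<String, String>>();

    public void addTranslation(String peer, String myField, String theirField) {
        Map<String, String> mapping = translations.get(peer);
        if (mapping == null) {
            mapping = new HashMap<String, String>();
            translations.put(peer, mapping);
        }
        mapping.put(myField, theirField);
    }

    // Rewrite a field name before forwarding a query to a given peer,
    // falling back to the original name if no mapping is known.
    public String translate(String peer, String myField) {
        Map<String, String> mapping = translations.get(peer);
        if (mapping == null || !mapping.containsKey(myField)) {
            return myField;
        }
        return mapping.get(myField);
    }
}

So, using the hypothetical names above, translate("peerB", "project/title") would come back as "project-title", matching the second XML fragment; the interesting (and hard) part Aberer et al. are after is presumably learning or gossiping these tables rather than hand-writing them.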

Next up is BestPeer, which apparently improves on a P2P system by adding mobile agents.  I have a long standing point (Joseph & Kawamura, 2001) about the unpredictable benefits of mobile agents, and I have the BestPeer paper on my reading list, so I will discuss that in a future blog post.  The final work mentioned in the section on what agents could offer P2P systems is a theoretical analysis of search in distributed agent systems by Shehory (1999), which I also have on my reading list, so more on that soon.  Overall I think I am being more and more persuaded that there is agent research that can inform P2P researchers, but the more complex question is whether agent research is useful for P2P developers.  My main gripe is that simply citing the list of properties that agents should have (e.g. autonomy, reactivity, etc.) is not enough to explain their value.  One has to present mechanisms that support autonomy, reactivity and so forth, and then show how their use brings some specific benefit to the system they are being incorporated into.  I guess the alternative tack here is to say that the agent field has lots of analysis of the behaviour of systems comprised of multiple autonomous entities, and attempts at producing design guidelines to handle development of such systems.

The next section of the paper is on bottom-up approaches to MAS, such as the DIET and BISON projects, which are inspired by natural ecosystems.  This is certainly an interesting area of research, and the suggestion seems to be that these lightweight multi-agent platforms could serve as a testbed for P2P systems, although it feels a little back to front given that the granularity of P2P systems is usually smaller than that of even the simplest multi-agent systems.  I think the challenge here is that laboratory-based platforms like these are generally likely to be cut off from real P2P users, unless they achieve critical mass within the research community itself.  Clearly such things can be used as test-beds to provide theoretical results about distributed systems; but any P2P system that hopes to be used by a non-trivial number of people is probably going to have to be built "close to the metal".  Again I am running up against this difference between P2P users, developers and researchers.

The final portion of this paper focuses on Koubarakis' own research on P2P publish/subscribe systems.  Koubarakis' approach is based on the idea that:
The next generation of P2P data sharing systems should be developed in a principled and formal way and classical results from logic and theoretical computer science should be applied
although that makes me think of a chapter in the book "The Next Fifty Years" where Paul Ewald argues that in medicine fundamental achievements have occurred more through the testing of deductive leaps than through building-block induction.  He gives examples such as Edward Jenner's discovery of vaccination in the absence of any knowledge of viruses as evidence that simply trying to understand the workings of disease at the cellular and biochemical levels may be insufficient to make great leaps.

Actually I'm not sure of the validity of my analogy here: I had been thinking of Ewald's points as being about accidental discovery versus theoretically informed development, when actually they are slightly different, since the process of generating a hypothesis to test necessarily involves some theoretical input.  I think my concern really stems from the plethora of available theories and the difficulty of assessing the extent to which different theories are experimentally grounded.  Developing P2P systems in a principled and formal way will certainly be attractive to those who are well versed in the principles and formal theories of computer science.  Having spent some time becoming more versed in them myself, I am not convinced that they are purely virtuous; I feel there is an extent to which theory can end up serving itself rather than serving the development of useful techniques and systems.

In conclusion, Koubarakis cites results from his own research calculating worst-case upper bounds for the complexity of satisfying and filtering queries within their publish/subscribe networks.  I think a lot of my personal confusion in this area comes down to differentiating between systems that are simulations designed to provide support for theoretical results and systems that are frameworks on which one might hope to build applications for use in the real world.
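As a way of grounding that distinction for myself, here is a toy sketch (my own, not the formal query language analysed in the paper; the attribute names are made up) of the kind of attribute-based filtering a publish/subscribe node has to perform:

# Each subscription is a set of attribute constraints.
subscriptions = {
    "alice": {"topic": "p2p", "year": 2003},
    "bob":   {"topic": "agents"},
}

def matches(subscription, publication):
    # A publication satisfies a subscription if every constraint is met.
    return all(publication.get(k) == v for k, v in subscription.items())

def notify_subscribers(publication):
    return [name for name, sub in subscriptions.items() if matches(sub, publication)]

pub = {"topic": "p2p", "year": 2003, "title": "MAS and P2P computing"}
print(notify_subscribers(pub))  # ['alice']

The naive version above checks every constraint of every subscription for each publication, and it is worst-case costs of this kind - for much richer query languages than simple attribute equality - that the theoretical bounds are about.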

One of the key things I realise re-reading all these papers is that I am not really interested in industrial software engineering.  I am not really interested in developing techniques that might be used in factories or supply chain management; I am interested in writing code that everyday end users (including myself) interact with.  It was the potential of the digital butler that got me interested in agents.  P2P systems and search engines were interesting because of the experience they delivered to the end user.  I think that's what I repeatedly struggle with regarding agents research - trying to find something of direct use to the end user.
ResearchBlogging.org
Cited by 12 [ATGSATOP]

Manolis Koubarakis (2003). Multi-agent systems and peer-to-peer computing: Methods, systems, and challenges. Lecture Notes in Computer Science, 2782, 46-61.

References (my scholar system couldn't handle this paper's reference format - didn't want to burn time on fixing that at the moment)

K. Aberer, P. Cudre-Mauroux, and M. Hauswirth. The Chatty Web: Emergent Semantics Through Gossiping. In Twelfth International World Wide Web Conference (WWW2003), May 2003.

D. Bertolini, P. Busetta, A. Molani, M. Nori, and A. Perini. Designing peer-to-peer applications: an agent-oriented approach. In Proceedings of the International Workshop on Agent Technology and Software Engineering (AgeS) - Net.ObjectDays 2002 (NODe02), volume 2592 of Lecture Notes in Artificial Intelligence, pages 1–15. Springer, October 7–10, 2002.

T.W. Finin and Y. Labrou. Napster as a Multi-Agent System. Presentation at the 18th FIPA meeting, University of Maryland Baltimore County, July 2000.

Joseph S.R.H., Yukawa J., Suthers D. & Harada V. (2009) Adapting to the Evolving Vocabularies of Learning Communities. International Journal of Knowledge and Learning.

Joseph S. & Kawamura T. (2001) Why Autonomy Makes the Agent. In Agent Engineering, Eds. Liu J., Zhong N., Tang Y.Y. & Wang P. World Scientific Publishing.

O. Shehory. A Scalable Agent Location Mechanism. In Proceedings of ATAL 1999, pages 162–172, 1999.

Tse B., Raman P. & Joseph S. (2006) Information Flow Analysis in Autonomous Agent and Peer-to-Peer Systems for Self-Organizing Electronic Health Records. In Agents and Peer-to-Peer Computing, Eds. Joseph S.R.H., Despotovic Z., Moro G. & Bergamaschi S. Lecture Notes in Artificial Intelligence, Volume 4461.

Thursday, June 18, 2009

Csikszentmihalyi & Hermanson (1999) Intrinsic Motivation in Museums: Why Does One Want to Learn?

This is another paper that was recommended to me by Peter Leong who is teaching a course in Second Life this summer for the College of Education at the University of Hawaii. We are trying to better understand how we might build engaging learning spaces in Second Life.

Reading this paper I started wondering what proportion of the population goes to museums. Superficially I imagine computer games and films/TV to be far more frequently consumed by the general population, although since having children I realise what a valuable resource museums are. Is going to the cinema more popular than going to the museum? I guess the big difference is whether you are asking your audience to sit in a chair or to walk around, and whether they are hoping for thrills rather than to be made to think. One imagines that theme parks are more popular than museums, but again it would be interesting to know the real statistics.

Csikszentmihalyi's concept of flow was mentioned in the McClelland (2000) paper I blogged about previously. It seems like Pine and Gilmore's experience realms diagram is a subdivision of flow; since reading Csikszentmihalyi's paper, where he mentions flow in the context of watching a basketball game, the implication seems to be that one can get sucked into a state of flow in both passive and interactive experiences, and in either absorptive or immersive experiences. However I am less clear about this latter dimension; I guess immersion is where you are totally immersed, either actively in a role or in a passive appreciation of something. Funny, as I would call that being absorbed, but absorption for P&G seems to be more about maintaining a distance from the thing you are observing, e.g. in an educational experience where you try and work out how something works.

Csikszentmihalyi & Hermanson (1999) distinguish extrinsic and intrinsic motivations, and argue that museums must rely on intrinsic methods of motivation. A good part of the paper is taken up describing the flow concept, and the range of activities the authors suggest can induce flow is wider than I expected. Flow apparently relies on activities that have clear goals and appropriate rules:
Conflicting goals or unclear expectations divert our attention from the task at hand
However this is at odds with some of my experience of programming, where I am changing goals as I explore different possibilities. It seems to me there can also be a state of flow as I explore and reject possible goals, although I take the main point that this does require some shift in the level of attention. Given clear specifications and little ambiguity one can become completely immersed in programming to specification, but sometimes the most interesting solutions come from questioning the goals and expectations to find alternative solutions to the real underlying problem.

The authors also mention that as skills increase, the challenges of the activity must increase to maintain flow, which reminds me of the suggestion in other motivation literature (Dornyei?) that tasks should be hard enough, but not too hard, to maximize motivation, although I have yet to find any empirical studies which back this up. The authors also cite a number of references to support the assertion that affective processes can be as important as cognitive processes in learning, and this ties in with ideas about memory being strongest when things are linked to emotionally charged events. Further discussion of flow includes the assertion that:
when involved in the activity, the individual fully expresses the self
although sometimes when I am in a programming or writing flow I think I lose my "self". Overall the discussion of flow is interesting, but it only seems to tie weakly to the design challenge of museum exhibits. At least it is not clear whether the state of flow which I associate with much more focused activities is necessarily the state we should be trying to induce in museum visitors; although arguably it would be no bad thing to have visitors losing themselves in the exhibits. The authors provide the diagram shown above to indicate one approach to structuring exhibits or experiences at museums. There is the "hook" that piques initial interest, opportunities for involvement and then a set up for intrinsic rewards that hopes to stimulate flow. There are many good suggestions such as trying to connect exhibits to the individual visitor and presenting things as perspectives rather than fact:
Information that is presented as true without alternative perspectives discourages the motivation to explore and learn more
This is slightly ironic, though, as the statement itself is presented as fact rather than perspective. There are further discussions of the conditions for flow, such as the suggestion that displays should provide information by which visitors can compare their responses to other standard(s?), and that supportive environments provide people with choices and acknowledge their perspectives or feelings; however I find these difficult to conceptualize without more concrete examples. The authors do acknowledge that as yet there is:
no table where we can look up the elements that will attract the curiosity of different types of visitors.
Just started having this idea about doing roleplays and paying SL residents to come in and be actors for some relatively low rate of Linden dollars. Arguably they would be more fun to interact with than scripted bots, and would allow for the possibility of "unscripted" emotional interaction, although of course the payment aspect might negatively affect the social interaction. What about tasks or role plays where you don't get paid unless a team works together? An extrinsic reward may undermine intrinsic motivation according to this paper, but museums pay actors to form living exhibits ... I wonder if one could create something engaging ... could the activity be interesting enough that people would get involved anyway? I wonder what kind of games have been built in Second Life already? I don't hear much about that, but then I haven't looked.

It seems like you could have a murder mystery, or an ecological mystery - you could have a subterranean level with dwarves or gnomes (or menehune) suffering from a disease, and rather than just find a cure, you have to convince the dwarves to change their behaviour, which requires talking to multiple NPCs and working out a convincing argument - all much easier to make believable if you can make the dwarves real people playing roles ... they can judge for themselves whether they have been convinced by what the team of detectives comes up with, and be in a position to hand out prizes (in a suitable story context) when they feel they have been convinced. Of course we also have to deal with new people popping in at different times, but we could deal with that by restricting the number of entrances; we could have a leader board of time taken to solve the problem - I guess the whole thing could reset after a certain time - but what we really want is to allow multiple actors to come in and have their teamwork be required to solve the problem - I guess there could be multiple versions of the game - one if there is only one avatar, and other harder versions if there are more avatars around ...

Some related SL activity includes Second Life Singers, and also a Macbeth interactive experience, Virtual Hallucinations, and the OSU Medicine Testis Tour.

Also an interesting blog on Education in Second Life.

ResearchBlogging.org
Mihaly Csikszentmihalyi & Kim Hermanson (1999). Intrinsic Motivation in Museums: Why Does One Want to Learn? In The Educational Role of the Museum, ed. Eilean Hooper-Greenhill, 146-160.

Cited by 70 [ATGSATOP]