Friday, November 16, 2012

Criticisms of Agile Development by Dennis et al

In their book "Systems Analysis and Design with UML" 4th Edition, Dennis et al discuss Agile Development, and mention the following criticisms:
  1. Today's business environment, where much of the actual information systems development is offshored, outsourced, and/or subcontracted does not fit well with agile development methodologies requiring co-location of the development team
  2. If agile development is not carefully managed, and by definition it is not, the development process can devolve into a prototyping approach that essentially becomes a “programmers gone wild” environment where programmers attempt to hack together solutions
  3. A lack of actual documentation created during the development of the software, raises issues regarding the auditability of the systems being created. Without sufficient documentation, neither the system nor the systems-development process can be assured.
  4. It is unclear whether agile approaches can deliver large mission-critical systems
My immediate reaction to point 1 is "Google Hangouts", "Skype Screenshare" etc.  It seems to me that one can work almost side by side with other people around the world now.  Regarding point 2, I think this is a danger for all software projects.  I don't know that I agree that agile methods are not carefully managed.  It would seem to me that careful management is an orthogonal issue to agile development.  You can have a large waterfall development process that is not carefully managed, and ends up generating reams of useless documentation, and you can have a carefully managed agile project that keeps the developers talking to the customers, and any documentation generated carefully in step with needs and software alike.  

Regarding point 3, agile methodologies emphasize working software over documentation, but that doesn't mean there should be no documentation.  Dennis et al appear to be arguing that some critical mass of documentation is required in order to perform an audit, and assure the system and systems-development process. I don't see why this should be the case.  Surely an audit can be performed on a working system as well as on documentation.  Furthermore, isn't the bigger danger that one has lots of documentation that is unrelated to the actual system, or indeed to the actual needs of the users?  In that case, what does the documentation help us assure in terms of the system or the systems-development process?

Regarding point 4, this seems like an open question.  One might well argue that large payroll systems, or systems like the Mars rover, must be developed much more carefully than a social media app for a mobile device, but in all cases if the resulting system fails to meet user needs then there are no winners. Personally, I am skeptical that the big-design-up-front waterfall model necessarily produces good results on large mission-critical systems.

To be fair, Dennis et al go on to say:
Even with these criticisms, given the potential for agile approaches to address the application backlog and to provide timely solutions to many business problems, agile approaches should be considered in some circumstances.
So all credit to them for including the agile ideas in their systems analysis textbook.

Monday, November 12, 2012

Tom Gammarino's Free Writing Challenge

So as part of my game programming course, guest imagineer Tom Gammarino challenged my students to a 10-minute free-writing challenge, where they write on any topic for 10 minutes without stopping, and I thought it important to take the challenge myself.  My ten minutes of writing without stopping became 15 minutes, and I include it below.  I had planned to do something more narrative-based, but it became more about some thoughts on games I am wondering about being able to create.  Either way it was fun to force writing like this - I wonder if I could do it for longer - I always remember that author who thought he was going to die (Anthony Burgess?) who tried to write all his remaining novels in a year, and then didn't die and just carried on writing at some ridiculously prolific rate - anyway, I want to try this again and force myself into some more narrative thing where I am describing something that is happening to some set of characters ...

There was a young man from nantucket, I like limericks and IO'd like to implement a computer system to generate them ,but I just don't seem to get time for these thethings during the normal course of he semester, and I want to create a computer game interactive narrative that explores the development of Japan during the Jomon period, which is a really interesting period since they seemed to develop advanced pottery before some other parts of the world ,who were still in the new stone age or something.  of course the astounding thing is the suggestion that we can really know anything about what people's lives were like during that time.  I'm reading these japanese history manga and the early lives seem pretty idyllic in terms of hunting and gathering and building small villages and starting to use agriculture, but pretty soon we get into the later yayoi and kofun periods and before long there is warfare and invading korean and things like that - seems like agriculture supported a huge increase in population and that ultimately led to people who had leisure time for warfare and inequality and things like that.  Of course before the start of organized hostilities there would have been lots of unpleasant situations like not managed to get enough to eat, and your kids dying from starvation, and getting injured going hunting and starving to death.  
It would just be so fascinating to actually go back in time and watch some people living in one of these periods that we only really know anything about by archaeology and inference based on observing indigenous tribes - of course I guess the thing to do would be to become an anthropologist and go off and live with some of these pre-industrial tribes that still live in the amazon rain forest - I did take an anthropology course at edinburgh university one time and the instructor had lived in the rainforest with one group and talked about his experience of being a child, like they couldn't leave him along in the forest as he would get totally lost - he didn't have any independence and was totally reliant on the tribesmen to get him to places and to feed him, since he had none of the skills that allowed him to survive in the forest - of course it's the gradual introduction of technology and adoption of new techniques for getting food etc. that's just absolutely fascinating which is what we see in games like civilization and makes them real fun, although they do still seem to lack something of a narrative.  
I ilke these japanese history comics because they give a bit more of a narrative sense, but it doesn't go quite far enough - they introduce some early historical figures like some queen who sent a messenger to mainlinad china in the 2nd century, but we don't really know hardly anything about her, and so they kind of sketch her out as a kind of intelligent young woman who was really good at predicting the weather, and who was raised up to be queen to avoid the men all fighting each other to be king, but we really don't know the politics of the events of the time, and I guess they are afraid to make them up since they are trying to be historically accurate books/comics things, but I think they could go further, but then I am also complaining that they don't really know how people lived although I haven't followed up on all the background sources that they might be using.  but there's a difference about developing a narrative.  There was a book, "clan of the cave bear" or something that explored that - it was like a story set in the stone age, and "American Gods" had some of that in it's little pre-history vignettes - I still think an interactive narrative in the Jomon would be cool, but I'd really like an underlying physics model so that game play was kind of open ended, e.g. you could make clay into pots and discover some early technologies, but in a more narrative oriented fashion than civilization, e.g. you can store more nuts and berries in your pots in order to make it through the winter, I mean get your little family to make it through the winter - I can imagine a really raw game where sometimes you'd have to make choices like leaving a sick child to starve in order to have your other healthy children survive - would that make for fun gameplay? 
I'm not sure - it's certainly a very dramatic issue and something that early humans likely had to face - there was a song I learnt in a language school in japan that was said to be dedicated to those children that had to be killed so their brothers and sisters might survive.  It seems so horrific in this modern age where we expect all of our offspring to make it to maturity - is there value from exploring these things?  perhaps in just making us appreciate how lucky we are now ...

Friday, October 26, 2012

Code Complete: High Quality Routines

So reading Steve McConnell's Code Complete chapter 7 on writing "High-Quality Routines", I am struck that there is lots and lots of great advice in there about writing methods.  Particular things such as clear naming conventions, relatively short routine length, and short lists of input parameters are all excellent.  However, the problem that I have with the chapter overall is that it feels like it is easy to read it and then not put much of the advice into practice.  While the chapter has lots of great example code snippets, I feel it is easy to ignore them, given that they're not being encountered as part of a larger project.

Maybe this is just me.  I'm a huge fan of working on things in a larger context, but I know that that's not everybody's cup of tea.  It has been argued that some people like to read the abstract theory associated with things before encountering examples in context.  If you're one of these people please do get in touch and let me know.  I'd like to learn more about your thought processes.

Anyhow, just as an example of some mixed routine practices from things I was programming today, I created one short routine
and grabbed another from StackOverflow:

http://stackoverflow.com/questions/8240637/javascript-convert-numbers-to-letters-beyond-the-26-character-alphabet
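The StackOverflow routine solves the classic spreadsheet column-naming problem (1 → A, 26 → Z, 27 → AA). A minimal sketch of that bijective base-26 conversion (my own version for illustration, not the exact code from the post) might look like:

```javascript
// Convert a 1-based column number to its spreadsheet letter name.
// This works beyond 26 because column names are bijective base-26:
// there is no zero digit, so we subtract 1 before each divide.
function columnToLetters(n) {
  var letters = "";
  while (n > 0) {
    var remainder = (n - 1) % 26;
    letters = String.fromCharCode(65 + remainder) + letters; // 65 is "A"
    n = Math.floor((n - 1) / 26);
  }
  return letters;
}
```

The subtract-one step is what handles the off-by-one around "Z", so columnToLetters(27) comes out as "AA" rather than wrapping incorrectly.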

Now I've got a really awful vanilla name for the routine I created (processForm) and all sorts of hacked-together pieces of things, but that's partly a reflection of the "tracer bullet" methodology I was working with here, which was to get something that would take a form submission from a Google Apps Script-generated HTML page and enter the resulting data into a spreadsheet, as described in this StackOverflow post:

http://stackoverflow.com/questions/13086880/can-i-mix-jqueryui-and-google-app-script-form-submissions

At least I can be pleased that both routines are relatively short and have a very small number of input parameters.  I think that both could be improved with refactoring.  I'll present that in a future blog post, but right now I know my students and I are going to be using this code every week to support assignment submissions, and getting a feel for making this work in Google Apps Script was the top priority that trumped every other consideration.  Not the best programming practice perhaps, but let's see how I can iterate and improve this over the next few weeks.
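As a hint of the refactoring direction, one step would be to split the row-building logic out of processForm so it can be tested without touching the spreadsheet. A sketch (the helper name and column list are hypothetical, not my actual code):

```javascript
// Hypothetical helper: map a submitted form object to a row array
// in a fixed column order, ready to hand to the sheet's appendRow.
function formDataToRow(formData, columnNames) {
  return columnNames.map(function (name) {
    // Missing fields become empty cells rather than undefined.
    return formData.hasOwnProperty(name) ? formData[name] : "";
  });
}
```

With something like this, processForm shrinks to collecting the form object and making a single call along the lines of sheet.appendRow(formDataToRow(formData, columns)).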

Google Apps Script really rocks, but I'm still trying to work out how to version and share all this stuff effectively, so we can see the software engineering and systems analysis issues in the context of this larger project, as described in this other SO post:

http://stackoverflow.com/questions/12712593/should-google-app-scripts-be-stored-in-version-control-like-github


Automated Text Chat Assistants for Online Classes


I'm proposing this research project at my university in the hope of getting some course releases to spend more time on it. The proposed project concerns analyzing the text and email logs from a number of online computer science and multimedia communication classes in order to identify common student questions and concerns.  This data will be used to improve course design and automate the generation of frequently asked questions (FAQ) web pages.  Longer-term goals include developing an instant messenger (IM) chatbot that can interact with students online when instructors are not available.

Background:
Researchers have studied the effects of making synchronous chat available in online classes: for example, Spencer & Hiltz (2003) showed that students found classes with synchronous chat significantly more ‘rewarding’ and less ‘complex’ than classes with only asynchronous communication. This is not to say that asynchronous interaction, such as that facilitated by blogs, emails, bulletin boards, wikis and so forth, is not valuable.  In fact, according to Johnson (2006), both synchronous and asynchronous forms of online discussion have advantages, and there is evidence that both contribute to student cognitive and affective outcomes.

While the benefits of rich interactive tools for online learning are not in question, there have been relatively few assessments (although see Tomei, 2006) of the time taken by the online instructor to manage these systems.  The principal investigator of this project argues that much instructor time could be saved by the automated generation of class frequently asked question (FAQ) pages, as well as by providing multiple means of access to this information, possibly including chatbot technology to interact with students over instant messenger (IM) systems such as Skype.
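As a first, purely illustrative sketch of what automated FAQ generation might involve (the function and thresholds here are assumptions, not part of the proposal): pull out chat-log lines that look like questions and keep the ones that recur.

```javascript
// Sketch: find candidate FAQ entries by counting repeated question
// lines in a chat log, after trivial normalization (trim + lowercase).
function candidateFaqs(logLines, minCount) {
  var counts = {};
  logLines.forEach(function (line) {
    var text = line.trim().toLowerCase();
    if (text.charAt(text.length - 1) === "?") {
      counts[text] = (counts[text] || 0) + 1;
    }
  });
  // Keep questions asked at least minCount times, most frequent first.
  return Object.keys(counts)
    .filter(function (q) { return counts[q] >= minCount; })
    .sort(function (a, b) { return counts[b] - counts[a]; });
}
```

A real system would of course need fuzzier matching than exact-string counting, which is part of what the proposed log analysis is intended to establish.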

There have been a number of attempts to integrate chatbot technology into the online classroom over the years.  For example Heller et al. (2005) demonstrated that famous person chatbots (e.g. FreudBot) are promising as a teaching and learning tool in online education; while Mikic et al. (2009) developed the CHARLIE chatbot which can maintain a general conversation with students, showing them contents of courses, and asking them questions about learning material.  Even without a working chatbot there are studies (Kerly et al., 2006) using Wizard-of-Oz experiments that indicate this technology, if successful, could have widespread application in schools, universities and other training scenarios.

There are some positive results at the K-12 level (Kerly et al., 2007) however other authors (e.g. Knill et al., 2004) have cautioned that:

The teacher is the backbone in the teaching process. Technology like computer algebra systems, multimedia presentations or ‘chatbots’ can serve as amplifiers but not replace a good guide.

Despite these concerns, recent researchers have experimented with humor-enabled chatbots (Augello, 2008) and chatbots that support second language learning (De Gasperis & Florio, 2012; Jiyou, 2009; Sha, 2009); while yet others have demonstrated chatbot effectiveness in supporting network management training (Leonhardt et al., 2007) and security training (Kowalski et al., 2009). Furthermore, in a detailed study, Alencar & Netto (2011) developed an online education portal chatbot that successfully answered 69% of the students’ general questions about distance education.
In summary, it seems that there is great potential for the use of chatbot technology in the online classroom, but that one should be careful not to try to replace the human instructor entirely.  The proposed project is focused on providing automated online teaching assistants that will help the human online instructor manage their online classes. The principal investigator's experience indicates that synchronous chat helps with student engagement and retention, although as Willging & Johnson (2004) point out, students’ reasons for dropping out of an online program are varied and unique to each individual.  It is the principal investigator's hope that well-designed automated FAQs and IM chatbots can serve to amplify the retention-enhancing effect of synchronous chat in online classes.
Clearly a cautious approach is required to develop systems that can successfully support online learning.  This project plans to analyze the logs from several online classes using the ethnographic approach demonstrated by authors such as Zembylas & Vrasidas (2007).

Work Plan:
Jan/Feb 2013: Analysis of CSCI 4702, 4705 and MULT 4702 chat logs
Feb/Mar 2013: Generation of 1st version of FAQs
Mar/Apr 2013: Automation of FAQ generation
Apr/May 2013: Link existing chatbot to FAQs to support usability testing over summer 2013

Work and Funding so far:
This project has not previously received any funding, but the principal investigator conducted an exploratory analysis of the chat logs during summer 2012.  A basic chatbot that answers simple questions has been created and deployed; further analysis and development are required to make something that can be of use in an online course.

No funding has been received or applied for from other sources.  The project is ongoing and will continue as long as the principal investigator continues to teach online classes.  However, in the first instance, a single course release is being requested to help work through the large volume of text chat that needs analyzing.

Budget Justification:
The budget is for a single course release to provide the principal investigator with time to conduct the chat-log analysis and do code development based on the results.

Access:
No specific facilities are required.  The only requirement is IRB approval which is pending from the HPU CHS.

Dissemination:
Dissemination includes presenting the work at professional conferences such as the World Congress on Education or the International Conference on Advanced Learning Technologies, and ultimately in journals such as Knowledge-Based Systems and Distance Education.  The work would also be disseminated as an open source package and be promoted through the “Funniest Computer Ever” annual computer comedy contest. 
In the long term it would be good to see many educators making use of the techniques and tools developed from this project, and the plan is to disseminate the work as widely as possible through both academic, i.e. peer-reviewed publications and technical circles, i.e. an open source toolkit for online educators.

Deliverables:
Two conference paper submissions and working FAQ and chatbot prototypes by end of Summer 2013, for deployment in Fall 2013 classes.

Bibliography
  • Alencar, M. & Netto, J. (2011). Developing a 3D Conversation Agent Talking About Online Courses. In T. Bastiaens & M. Ebner (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2011 (pp. 1713-1719). Chesapeake, VA: AACE
  • Augello, A. (2008) Humorist Bot: Bringing Computational Humour in a Chat-Bot System. International Conference on Complex, Intelligent and Software Intensive Systems, 2008. CISIS 2008.
  • De Gasperis G.  & Florio N. (2012) Learning to Read/Type a Second Language in a Chatbot Enhanced Environment. International Workshop on Evidence Based Technology Enhanced Learning. Advances in Intelligent and Soft Computing, 2012, Volume 152/2012, 47-56
  • Heller, B., Proctor, M., Mah, D., Jewell, L. & Cheung, B. (2005). Freudbot: An Investigation of Chatbot Technology in Distance Education. In P. Kommers & G. Richards (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2005 (pp. 3913-3918). Chesapeake, VA: AACE.
  • Jiyou J. (2009) CSIEC: A computer assisted English learning chatbot based on textual knowledge and reasoning. Knowledge-Based Systems. Volume 22, Issue 4, May 2009, Pages 249–255
  • Kerly, A., Ellis, R. & Bull, S. (2007). CALMsystem: A Conversational Agent for Learner Modelling, in R. Ellis, T. Allen & M. Petridis (eds), Applications and Innovations in Intelligent Systems XV – Proceedings of AI-2007, 27th SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Springer Verlag 89-102.
  • Knill, O., Carlsson, J., Chi, A., and Lezama, M. (2004). An artificial intelligence experiment in college math education. Preprint available at http://www.math.harvard.edu/knill/preprints/sofia.pdf.
  • Kowalski S., Pavlovska K. & Goldstein M. (2009) Two Case Studies for Using Chatbots for Security Training. World Conference on Information Security Education 2009.
  • Leonhardt, M.D. Tarouco, L., Vicari, R.M.,Santos, E.R. & dos Santos da Silva, M. (2007) Using Chatbots for Network Management Training through Problem-based Oriented Education. Seventh IEEE International Conference on Advanced Learning Technologies, 2007. ICALT 2007.
  • Sha, G. (2009) AI-based chatterbots and spoken English teaching: a critical analysis. Computer Assisted Language Learning, Volume 22, Issue 3.
  • Spencer, D. H., & Hiltz S. R.  (2003) A field study of use of synchronous chat in online courses. Paper presented at the 36th Hawaii International Conference on System Sciences, Big Island, HI, January. http://csdl2.computer.org/comp/proceedings/hicss/2003/1874/01/187410036.pdf
  • Tomei, D.L. (2006). The Impact of Online Teaching on Faculty Load: Computing the Ideal Class Size for Online Courses. Journal of Technology and Teacher Education, 14(3), 531-541.
  • Zembylas, M & Vrasidas C. (2007) Listening for Silence in Text-Based, Online Encounters. Distance Education; May 2007; 28, 1.


Friday, October 19, 2012

Monkey Business Board Game with my Kids

So as described in yesterday's post I sat down with my kids last night and played a board game called "Monkey Business", by the Early Learning Centre people.  This game, as you can probably see from the image, involves hanging plastic monkeys on a tree-like thing.  The tree canopy is attached to the tree with a magnet.  When enough monkeys are hanging, the magnet detaches and drops the monkeys into a pool of crocodiles at the bottom. Each player takes a turn to spin the dial to see how many monkeys they have to add, and I think the person who adds the monkey that "breaks the camel's back", so to speak, is the loser.  I worry about that a little for my boys, since the focus in the game is on a single loser ...

I played this game with my three-year-old boys Jack and Arthur, and it proceeded reasonably well.  We took turns hanging monkeys from the tree.  Occasionally individual monkeys would fall off without the entire tree coming down, and there wasn't anything in the rules to cover this.  We didn't finish the game properly, as after about 6 turns Arthur got frustrated and knocked the canopy down, which was a shame, as I have yet to actually play the game to completion :-)

Part of Arthur's frustration might have been related to the previous set-up, which was that the boys all had to have showers and be completely ready for bed before they could 1) do their homework, and 2) play games.  My seven-year-old son Luke had Mathletics homework from school, which he enjoys, so he rushed down after showering to do that on the computer, and by the time the twins were down as well he had finished that, and all three boys moved happily on to their Kumon Japanese alphabet homework.

Now, while the boys quite like their Japanese alphabet homework (in fact Luke asked to do it when he woke up this morning), I have been setting a trend of letting them play on tablets after they do it. So Luke was still working on his Kanji, and Jack & Arthur had finished their Hiragana work, when I suggested the Monkey Business tabletop game.  It was greeted with enthusiasm by the twins.  However, during play I think I may have been occasionally distracted giving support to Luke with his Kanji homework, and I think Arthur was thinking "I really want to get on to playing with the tablets and I've had enough of monkeys, and Daddy's not completely focused on our game" when he decided to knock everything over.  I was rather upset.  I told him he couldn't use the tablet now, and he was also upset.

The evening ended with Luke finishing his Kanji homework and us all playing four-way Wii Mario Kart on our wall projector.  Initially Arthur was still in a bad mood, but he actually made great progress controlling his kart, and finished two or three races for the first time.  Luke beat me in almost every race, and Jack got frustrated that he couldn't finish the course when Arthur was managing it.  He needed some scaffolding and reassurance that everything was okay, and that it would take time to master Mario Kart.

So in summary, I'm glad I got the board game out and want to do it more.  Any recommendations for good board games for a dad and three boys aged 7, 3 and 3?


Thursday, October 18, 2012

London Educational Games Meetup Rocks!

I very much enjoyed attending the London Educational Games Meetup last night.  They had six people present their games and platforms and things. First Laurent Arhuro presented flashapps, which looked like a nice set of mobile games for kids.  I didn't get a chance to ask him whether he had any particular pedagogical approach for these apps.  I have my kids playing all the free learning apps, so I'm not sure I'm going to immediately check out his apps since they cost money (am I a bad person?), but I wondered why he didn't have free and premium versions ... also, is flashapps a confusing name, since Flash is a technology that is now gradually dying and doesn't run on mobiles anyway?  Either way he gave some good insights into deploying on Amazon, iOS and Google Play, and how children are always moving on to the next thing.  My kids zap through game after game every 30 seconds or so.

Brian Egles presented Jaguar Maths in Motion, which sounded awesome - teaching kids about abstract math through a simulated F1 competition.  The project involves collaborating with schools.  Actually I'd like my boys to be slightly less obsessed with cars, but that's another story.  I asked Brian if the software was available outside schools; apparently it can be bought, but as we've already determined I have no money, so I guess I won't explore that in more detail for a while.  What was fascinating was Brian's description of how it helps kids who are switched off on math - kids who are not interested in the more abstract stuff like measuring angles.  He said early use of shop-style exercises for addition and multiplication works fine in schools, but the abstract stuff is hard for some who could do it but can't see the point.  Personally I tend to think that schools have it all back to front and should all be based on the Summerhill model, but again that's another story :-)

Michael Carter presented his board game Take-Off, which reminded me about the lack of multiplayer support for my three boys playing on their tablets.  It's board games tonight, I swear!  I don't want my kids growing up unable to negotiate and take turns.  Anyway, I asked Michael if his game involved a die.  Apparently not - you get a card with a number of moves from 1 to 5, so there is some skill in landing on the triangles on the board that allow you to move the central section back and forth, which can block access to the hangars that you need to reach to win.  That technique reminded me of Mario Kart - players who are behind tend to have an advantage ...  Kirsten mentioned that Rob Harris has a playtesting board game meetup that might be good for Michael, and I recalled somebody tweeting recently about it being nice to move real pieces around on a real board for a change.  Board game renaissance, anyone?

In the break my friends and I started wondering why there aren't more handwriting recognition apps on touchscreen tablets/phones.

After the break we heard from Peter Stidwell about his ethical thinking game Quandary, which sounds like a more philosophical version of Civilization, but is free, so I'll see if I can get my computer game students to play it.  It's all about settling on a new planet and negotiating with people, and it has a graphic novel format!  Will blog about it once I've played it, if I get a chance.

We then heard from Ash Cairo about Phone Battles, and despite the guns and fast food content it looked like there was an exciting multiplayer, multiplatform system there.  Finally we heard from Julia Bateson from Excite-ed, who was previously a school teacher but now runs programs in schools to help train kids to become game designers - sounds like what I do for 18+ year olds at Hawaii Pacific University :-)  She mentioned Gamestar Mechanic, which looks even easier to use than Construct 2, GameMaker, GameSalad and Stencyl, but costs money, so I guess I won't be using it soon.  What particularly resonated was her talking about how kids from all different groups could collaborate - arty, techy, wordy, all the different combinations of skills going into games.  Totally!  Exactly what I try to do with my collaborative game programming and game design classes, where I try to get designers and programmers to collaborate to produce games.

Phew, what an evening, and I followed it up with a great-ish burrito - should have ordered the salsa diablo.  Sorry I couldn't make it to "The Boot" for drinks - three young kids and all ... maybe next time - well, realistically in about 15 years :-)

Monday, October 1, 2012

Google Spreadsheet ImportRange not working for me ... :-(

So I was hoping to use the Google Spreadsheet "ImportRange" function to cross-link some spreadsheets in the hopes of automating assignment status updates for my students, but no such luck:

http://support.google.com/docs/bin/answer.py?hl=en&answer=155183

I've tried to replicate in spreadsheets created from scratch.  Here's the source spreadsheet:

https://docs.google.com/spreadsheet/ccc?key=0Aq72y9iq5_1UdGFBRmc1VDYyS1p2UHZtLTd3U3REcHc#gid=0

and here's where I'm trying to pull in the data to:

https://docs.google.com/spreadsheet/ccc?key=0Aq72y9iq5_1UdHVnNzQ4SHBIRU9BS3ZhM2YyZFBqWWc#gid=0

and I have cell A1 set to:
ImportRange("0Aq72y9iq5_1UdGFBRmc1VDYyS1p2UHZtLTd3U3REcHc","Sheet1:A1")
but I get this error:
"#REF! error: The requested spreadsheet key, sheet title, or cell range was not found."
which, according to the support document mentioned above, is the error you get if you are not added as a collaborator on the source document - but since I am the author of both spreadsheets, that doesn't make much sense.  I have tried playing with the sharing settings of both spreadsheets, setting them to view-with-link and completely public, but this has no impact on the error I get above.
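One thing I still want to rule out is the range-string syntax itself: if I'm reading the support document's examples right, the sheet name and cell are separated by an exclamation mark rather than a colon, so the formula would be (with my actual source key):

```
=ImportRange("0Aq72y9iq5_1UdGFBRmc1VDYyS1p2UHZtLTd3U3REcHc", "Sheet1!A1")
```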

In my actual work case I get a completely different error trying to do approximately the same thing:


and of course I've searched Google for answers, and found various posts that describe similar errors, but haven't helped me fix my issue:

https://productforums.google.com/forum/#!topic/docs/HiqmmeeV0WI (incorrect sheet ids/syntax)

http://productforums.google.com/forum/#!topic/docs/6lczh2QEEXQ (wrap in expand statement)

Google Spreadsheets is really great, and I can get loads done in the bits that I can get to work, but I really hate getting stuck like this and burning an hour on something where I can't get access to a more detailed error message, and where the user community's problem-and-solution dialogues are not organized around the documentation like they are for PHP and MySQL.

What I really really really want is some google spreadsheet people on IRC to actually talk through the problem ...

Friday, September 28, 2012

Funniest Computer Ever Competition Deadline Extended

Well, at least I'm back to blogging :-) I've been meaning to blog about the Funniest Computer Ever Competition for some time now, but I kept getting distracted by emails and other social networking sites.  The basic concept is this: let's have an annual computer comedy competition, with prizes awarded to the computers and/or robots that produce the most humorous output.

The specific objective is to get to the point where the winner of the 2020 contest can out-perform a professional human standup comic in front of a live human audience.  Many will see clear parallels with the RoboCup competition (robot team beats world soccer champions by 2050) and the Loebner Prize (have a program fool humans into thinking it is a human) to name a few.  There are other great inspirations such as chatbotbattles and the entirety of creation, really.

Anyhow, I'm chatting about this on the indiegogo website where we managed to crowd-raise prize money for the first year of the competition, and in the weekly London Online Course meetup, and on the facebook page etc.  To get to the point I'll try to avoid mentioning every connection strand here, but suffice it to say that the official rules are here:

https://github.com/tansaku/twss/wiki/Funniest-Computer-Ever-Rules

and the deadline for entries is being extended to November 1st, mainly to give us time to make the website look that little bit more professional.  We've had several entries already, but also requests for extensions, so it seems like the sensible thing to do.

The current web site is shown at the top right above.  It's a great placeholder, but I think it could be much, much better.  I had thought that a Google Sites template would be a good way to start, but I haven't really had the time to make it look more professional.  And while I wanted a sort of collaborative feel to it, the Google template announcements are not as usable or useful as I had hoped, and the Google Sites login requirements don't seem to let people get involved as easily as they might on facebook or even on github.

Anyways, some kind students from Hawaii Pacific University have taken pity on me and are helping me to try and make it look a bit more engaging.  Oh yes, and before I forget, I should mention I really hope we can get a load more open source computational humor projects out of this ... more soon ...

Thursday, September 27, 2012

Seymour Papert Storms My Mind

So at Bret Victor's behest I am reading Seymour Papert's "MindStorms".  Perhaps because I am the father of three boys, or perhaps just because one of my key roles is currently being an "educator", I find a number of the passages in the introduction to the book particularly moving, especially this one on "cultural toxins" and "mathophobia":

We shall see again and again that the consequences of mathophobia go far beyond obstructing the learning of mathematics and science. They interact with other endemic “cultural toxins,” for example, with popular theories of aptitudes, to contaminate people’s images of themselves as learners. Difficulty with school math is often the first step of an invasive intellectual process that leads us all to define ourselves as bundles of aptitudes and ineptitudes, as being “mathematical” or “not mathematical,” “artistic” or “not artistic,” “musical” or “not musical,” “profound” or “superficial,” “intelligent” or “dumb.” Thus deficiency becomes identity and learning is transformed from the early child’s free exploration of the world to a chore beset by insecurities and self-imposed restrictions.
This sums up one of my recent concerns with institutional education, as it comes across in my experience of putting my children in schools and even in my interactions with "bleeding edge" education from "top" universities such as Stanford and Princeton.  Maybe I'm getting excessively sentimental in my old age, but the following from the introduction to Mindstorms actually made me cry:

I have seen hundreds of elementary school children learn very easily to program, and evidence is accumulating to indicate that much younger children could do so as well. The children in these studies are not exceptional, or rather, they are exceptional in every conceivable way. Some of the children were highly successful in school, some were diagnosed as emotionally or cognitively disabled. Some of the children were so severely afflicted by cerebral palsy that they had never purposefully manipulated physical objects. Some of them had expressed their talents in “mathematical” forms, some in “verbal” forms, and some in artistically “visual” or in “musical” forms.
It's been a challenging week :-) but lest my melodrama obscure the message, I think it is extremely unfortunate when we see our children in these terms.  My eldest son has recently been using the Mathletics system on his computer at school and now at home.  It's great to see him excited about Math, but the excitement seems to come largely from the ranking and competitive component of Mathletics, rather than from real educational exploration.  I'm sure it is improving his fluency with numbers, but I would much rather he was playing with Papert's Logo Turtle or some modern equivalent, so that he was learning to control the computer rather than learning to better follow instructions loaded into the computer by other people.

Is Lego "Mindstorms" the key?  This last weekend I was involved in a team getting an Aldebaran Nao robot to pretend to be a Dalek, and my son got ever so slightly involved, but the slowness of the Nao programming, and also my lack of experience with it, prevented it from distracting my son from the more immediate rewards of a friend's Nintendo DS.  If anyone knows the best modern equivalent of the Logo Turtle let me know; otherwise I think my son is getting Lego Mindstorms for Christmas ...



Bret Victor blows my mind! Where does he get those wonderful toys?

Bret Victor's essay response to the introduction of a live coding enabled introduction to computer science at Khan Academy is a must read for any person who programs or teaches people to program.

Bret Victor's earlier video is also required viewing in my opinion:


I'm requiring it for my game programming students and am desperately trying to find some way for them to get access to the game development environment Bret showed us in the lecture above.  I emailed Bret Victor himself to ask for access, but was not surprised not to hear back.  Bret is clearly a man very much in demand, and I'm nobody :-)  

His latest essay with many chunked video examples hammers home the point about how to make programming understandable.  I just wish I knew what he is using to make his interface demos.  Perhaps he is intentionally keeping that a secret in order to encourage others to redesign their tool kits. I know of several projects inspired by his earlier video - I can imagine many more arising from this latest essay.  As an educator and a programmer I am frustrated that I don't have better access to the tools that Bret is describing.  I don't know if I have the requisite skills to help make it happen, but I want to help make those tools available to everyone.

The other thing is that I really want to see something like the inverse of what Bret shows where he demonstrates the process of abstracting code through a series of steps.  I want to see something more like his earlier example of creating bits of code by dragging and dropping shapes.  I've tried many many introductory programming packages, but still haven't found one (or created one) that allows the user to move an object through space and have the relevant bit of code pop up.  This I think would be so uber-useful for kids learning to code.  I just wish I had access to whatever toolkit Bret uses to mock up his UI ideas so I could quickly express this concept better.

Wednesday, September 26, 2012

Fun with Google App Scripts for Online Classes

Using Google App Script
So for a while I have been trying to work out if there was some way to have pages in Google Sites display content specific to the logged in user. I mainly wanted to do this so that students in my classes could look at a page in the Google Site I am using for a class, and see a personalized view of which assignments they had completed etc.

Previously I used a technique cribbed from a UH math professor which was to display a table of everyone's assignment statuses, anonymized using the last four digits of each students ID number. For a couple of years I did this in Google Sites by pulling in a view of a Google spreadsheet.
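The anonymization scheme is trivial but worth pinning down: key each row of assignment statuses by the last four digits of the student's ID. A minimal sketch (the IDs and statuses below are made-up examples, not real data):

```python
# Sketch of the anonymized status table described above: each student's
# row is labeled with the last four digits of their ID number.
def anonymize(records):
    """Map {student_id: statuses} to {last-four-digits: statuses}."""
    return {sid[-4:]: statuses for sid, statuses in records.items()}

# Hypothetical example data
grades = {
    "20031234": ["done", "done", "missing"],
    "20035678": ["done", "late", "done"],
}
print(anonymize(grades))
```

Anyone in the class can then scan the table for their own last-four-digits without other students' identities being exposed.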

The Old Embedded SpreadSheet View
The advantage was that it gave everyone an overview of where everyone was in the class. The disadvantage was that it was difficult to read and see where you were up to as a student. I had one student make some specific criticisms, but whichever way I tried to import a Google spreadsheet it didn't seem to work very well. Before the start of this semester, I finally set a bounty on a question in StackOverflow:

http://stackoverflow.com/questions/12064019/how-to-have-google-sites-pages-display-things-specific-to-the-currently-logged-i

The simple answer was Google App Scripts, which I believe are a relatively recent development, but then again ... Anyway, I then wanted a simple example of using a Google App Script to pull some data from a Google spreadsheet and display it in a Google Site, using the Google Site login as a key to look up data in the spreadsheet:

http://stackoverflow.com/questions/12264255/example-display-of-google-spreadsheet-data-in-google-site-via-google-apps-script

For some reason this got a "-1" in StackOverflow :-( but I eventually worked it out myself.  Perhaps I was being lazy, but I really love to have working examples of the kind of thing I want to do before burning hours and hours discovering something can't be done.  So anyway I answered my own question on StackOverflow with a link to a working demo.

Certain things became clear, such as that the only information I can get about a Google Sites user is their email and their timezone.  I also think it is a Google Plus login rather than specifically a Google Sites login, but that is another story.  I don't clearly understand how, but it seems you can have another email address, e.g. yahoo or whatever, as your email login for Google.  Google Sites is a great tool in many respects, although not as flexible as Ning in terms of using JavaScript, but it's free and pretty reliable.  For me, now that I know how to use it, it is really easy, but it requires you to have a Google account to edit, and I have had real trouble inviting non-technical users in.

Anyhow, Google Spreadsheets is just great as a simple database, since it's got such a fantastically convenient spreadsheet editing ability, and you can use formulas etc. So anyway, I was excited to get a Google App Script grabbing user-specific data from a Google Spreadsheet and displaying it to a Google Site user.

It took me a while to find what I needed in the Google App Script documentation, which seems rather sparse at the moment, particularly in terms of what JavaScript syntax you can and can't use:

http://stackoverflow.com/questions/12279357/is-there-a-complete-definition-of-the-google-app-script-syntax-somewhere

You do all your Google App Script editing in the Google online editor which includes debugging and libraries and lots of things that I am just getting the hang of; but basically I was able to achieve what I wanted for this semester's courses with a display of assignment statuses based on the Google spreadsheets I am using to keep track of student's progress.

Some frustrations remain.  I can't log in as the students to check what they actually see, although I have now rigged up a bit of a test in the script editor to check the output for each student; I see it in debug form rather than as the complete HTML page view, though, so that's a bit of a pain.  Also my University has Google-backed email, so students might not see the assignments view if they are logged in with their University accounts.  The only available solution here appears to be laboriously asking all students for all the email addresses they might be logged in with, and adding them to the spreadsheet so that any login email will work as a key for the right data.
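The multi-email workaround boils down to a simple lookup. Here is a sketch of that logic in Python rather than Apps Script, with made-up rows; in the real script the rows would come from the spreadsheet and the active address from the Apps Script session:

```python
# Each student row lists every email address they might be logged in
# with, so whichever one Google reports resolves to the same data.
# (Names and addresses here are illustrative, not real records.)
ROWS = [
    {"emails": ["alice@gmail.com", "alice@my.hpu.edu"],
     "statuses": ["done", "missing"]},
    {"emails": ["bob@yahoo.com"],
     "statuses": ["done", "done"]},
]

def statuses_for(login_email):
    """Return the assignment statuses for whichever address matched."""
    wanted = login_email.lower()
    for row in ROWS:
        if wanted in (e.lower() for e in row["emails"]):
            return row["statuses"]
    return None  # unknown address: the student needs to register it

print(statuses_for("Alice@MY.HPU.EDU"))
```

Normalizing case matters here, since the same account can be reported with different capitalization depending on how the student signed in.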


I would like to have a single script that I am editing; however, the mechanism for DRYing out Google App Scripts is creating libraries, which is fine, but when a library is in development mode it takes a really long time to load (at least 10 seconds), which kills my debug cycle. So at the moment I am swapping the code back and forth between the two classes as I add features like support for optional assignments etc.

It's not ideal, it's not DRY, and I have lost bits of code once or twice, but I have gradually refactored the code so that it now works generically across weeks of assignments, and pulls the entire assignment specification from another sheet in the same spreadsheet.  Ultimately, once it is stable, perhaps I can fix it as a versioned library ...


Another thing I have struggled with is that I can't catch errors, so they get displayed raw to the end user. At least, I haven't worked out how to display errors more gracefully ...

Also worth noting is that the first time a user sees the script they will get a warning message from Google, and they need to accept that the script was created by me, as indicated by my email address "tansaku@gmail.com". That doesn't look very professional and may put some people off, but it only displays the first time, and after that it stays gone, so it's perhaps not such a big deal.

And of course I cannot end without a shout-out to Coursera/Udacity/EdX, whose online courses have inspired me.  Their smooth JavaScript web interfaces are really what I'm aiming at.  I just don't want to have my courses in their closed boxes .... Freedom!!!!

Google Sites is now much more fully featured with the addition of Google App Script.  It's allowing me to develop a dynamic web application ON TOP of the existing Sites wiki and I basically outsource all the boring login/wiki stuff to Google.  The restrictions are some limitations on what I can do with the scripts and ultimately if this is to scale I will likely need to drop the Google Spreadsheet and move to some sort of cloud DB, but with relatively small class sizes, this is working okay for now, and I can't imagine how I could easily re-create the kind of interface I have to Google Spreadsheets.

Google's providing some great tools here, and I've been getting reasonably fast feedback on StackOverflow, but Google App Scripts are so new there's not much to find in Google searches, although I think that's changing.  I worry a bit about what kind of stack I'm buying into here, and I really wish Google had an IRC chat room where I could talk with the developers ...

TODO features

  • Allow completed weeks to auto-collapse, and be re-opened on request - complicated by Google's only allowing some JavaScript libraries - looks like jQueryUI is the way to go here
  • Allow students to upload their assignments directly through the script, so there's no gap where the professor misses submitted work - this looks like it is possible, but there are interface issues: in particular, while the Google App Script can generate HTML output it doesn't seem to be able to inherit the surrounding site's CSS style, so at the moment it's not clear how I can get BOTH the convenience of being able to edit a page of assignments in Google Sites wiki syntax AND have appropriate upload and submission elements ...
  • One of the particularly frustrating things is that while one can pass URL parameters coming in at the top level through to the script, the script plugin box that embeds the Google App Script in the wiki can't have any parameters added to it directly.  This is why I was forced to go down the library route to share code: really, the only difference between the scripts in different classes is the Google spreadsheet ID, and that could easily be specified at the point where the Google App Script is plugged into the Google Site - very frustrating that I can't specify it there.  If I want to have lots of little file upload boxes in my assignment page I would need some way of indicating which part of the assignment each one is for, and the only way at the moment will be to create a script library and then a new script for each upload box - how tedious ...
  • Make the Spreadsheet dependency more robust.  At the moment I am relying on the Google Spreadsheet having a certain format with sheets having particular names.  If anyone else was going to use this I'd need to make it more robust ...
  • Support for peer assessment so students can enter assessments of each others work
  • Dynamically adjust the height of the Google app script plugin to fit the content I'm serving from the HTML templates
  • Get my features into getsatisfaction or uservoice ...

p.s. many thanks to Alexis Brille for the great tick and cross icons. Interestingly, all images get pulled through a Google proxy, so I guess I get my own CDN for free? At least I haven't had to worry about re-sizing the images myself yet, but perhaps I should ...

Tuesday, June 19, 2012

Velmans & Schneider (2007) The Blackwell Companion to Consciousness


So I have hundreds of books on consciousness inherited from my father.  Currently I am reading Velmans & Schneider (2007) The Blackwell Companion to Consciousness, which is a collection of essays on consciousness.  My father made notes in the margin of the first several chapters.  Over a period of a number of Sunday afternoons when my boys have crashed out, I have been snatching glances at it.  I think I've finally got some coherent notes on the first three chapters.

Chapter 1: Frith & Rees (2007) "A Brief History of the Scientific Approach to the Study of Consciousness"

On review the key question for me seems to be "how does mind emerge from matter?"

The difference between materialism and immaterialism seems impossible to prove (p.10) - seems like purely a matter of faith whether the "material" world exists or not.

Hick's (1952) demonstration that response time is proportional to the information in the signal, e.g. the log of the number of choices, is impressive (p.15) -- reminds me of my attempt to quantify the meaning of a signal in terms of its ability to represent some simple world according to some representation.
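Hick's relationship can be sketched in a couple of lines. This is just an illustration of the logarithmic form, RT = a + b * log2(n + 1); the constants a and b below are made up, not Hick's fitted values:

```python
import math

# Hick's law: choice reaction time grows with the information in the
# stimulus, roughly RT = a + b * log2(n + 1) for n equally likely
# alternatives. Constants a and b are illustrative only.
def reaction_time(n_choices, a=0.2, b=0.15):
    return a + b * math.log2(n_choices + 1)

# Doubling the alternatives adds a constant increment to the response
# time rather than doubling it - the signature of the log relationship.
print(reaction_time(1), reaction_time(3), reaction_time(7))
```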

Key question seems to be what conscious awareness is for (p.19/20).  If some reasoning can be performed unconsciously, it can't be just for reasoning itself. Makes me think of learning to ride a bike, and that becoming unconscious. They mention handling novelty.  If you can capture a rule you can make it unconscious, but if you have to juggle a set of things you haven't yet determined a rule for ...

Chapter 2: Tye (2007) "Philosophical Problems of Consciousness"

The problem of ownership seems like a red herring (p.24).  I think pain could be transferred, and ultimately who cares about categories and properties of physical things? If my leg is amputated and attached to someone else it stops being my leg and the sense data coming out of it gets interpreted in a new way by the new person.

It seems to me that physical items ARE perspectival (p.25).  Gold may be a particular element, but you have to experience it to know what it is.  Mary the scientist who doesn't see a rainbow is just like the blind and deaf man who is supposed to fully understand what lightning is. We can never completely understand the subjective experiences of others, no?

The problem of the mechanism of consciousness seems more serious (p.26), but the problem doesn't seem to me to require better understanding of individual neurons, but of the behaviour of assemblies of neurons to produce conscious awareness

Unconscious zombies seem irrelevant if they are undetectably different (p.28) - similarly for the china body.  My thinking here is very strongly influenced by "The Mind's I" by Hofstadter and Dennett.

The problems of unity, transparency etc. all seem to be aspects of basic mechanism (p.32)

Chapter 3: Trevarthen & Reddy (2007) "Consciousness in Infants"

Infant and fetal developmental processes are all very interesting, but don't seem to relate at all to the philosophical question of consciousness in terms of explaining the contents of qualia.  I totally accept Vygotskian intersubjectivity, but this motivation for behavioral development seems unrelated to the philosophical issue of consciousness in terms of how conscious awareness arises from the functional organization of the brain.  Of course a system that had a component for conscious-awareness processing would not necessarily explain the contents of qualia, but it might explain the sense of some processes that were conscious subsequently becoming unconscious.

That human consciousness is motivated by the need for social communication is kind of a given, but it seems to me to come back to having a world model that includes a model of yourself, and I might now add that both those models need to be dynamic ...


Thursday, June 14, 2012

Random Sentence Generated from Skype Chat History

So I have been struggling to get skype chat histories from all of my online classes into text format.  Based on a cut and paste from the standard skype app I managed to get this random sentence which starts with the seed "I" and then chooses random words based on the probabilities of co-occurrence in the second half of my computer games class from fall 2011:
I am not try to be in my prototype is. Although it is the stream was attached to yield to make it was absolutely stupid http://www.tomshardware.com/news/AMD-ATI-Radeon-HD-7000-GPU,14253.html#xtor=RSS-181 hurr you playing like an automated submission system ...
I used the basic generation code from the NLTK and modified it per their suggestion to choose based on frequency rather than the most probable word.  I just dropped the code in pastebin and it looks like I can embed via a javascript call - cool!
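The frequency-based choice can be sketched from scratch like this (an illustration of the idea, not the actual NLTK-derived code): build bigram counts from a corpus, then sample each next word in proportion to how often it followed the previous one, rather than always taking the most probable word.

```python
import random
from collections import Counter, defaultdict

def bigram_counts(words):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def generate(counts, seed="I", length=10, rng=random):
    """Random walk: pick the next word weighted by observed frequency."""
    out = [seed]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Tiny made-up corpus standing in for the chat history
corpus = "I am not sure I am ready to make it work".split()
print(generate(bigram_counts(corpus), seed="I"))
```

With a real chat log as the corpus, seeding with "I" produces run-on nonsense much like the sentence above.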




The current file format is based on a direct copy and paste from skype chat, e.g.
[12/14/11 9:55:03 AM] Sam Joseph: I was able to dodge the missiles pretty effectively
However I just discovered that the skype log files are in sqlite format. Woot! So now I'll rework this to call the sqlite db directly - which will be much simpler, particularly since Skype was crashing when I tried to cut and paste any large chat histories.
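A sketch of what reading the log straight from SQLite might look like. The table and column names here (Messages, author, timestamp, body_xml) are assumptions about Skype's main.db schema from memory, so inspect your own file with `.schema` first:

```python
import sqlite3

def read_messages(db_path):
    """Return (author, timestamp, body_xml) rows in chronological order.

    Assumes a Skype-style Messages table; adjust the table and column
    names to whatever your own main.db actually contains.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT author, timestamp, body_xml FROM Messages "
            "WHERE body_xml IS NOT NULL ORDER BY timestamp"
        ).fetchall()
    finally:
        conn.close()
```

Joining the message bodies into one string would then give the training corpus directly, with no cut-and-paste step to crash Skype.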

Why, you might ask, am I doing this? The answer is to try to improve on chatbots like Zarquon, and also to run analyses on the extensive skype chats I have with my students - ultimately I hope to be able to completely replace myself with a skype chatbot!

Okay, now I really have to get back to my meta-analysis!

Thursday, May 31, 2012

Apple Frustrations ...

I spent a good couple of hours tearing my hair out over a bug in my iPad app the other day.  The key problem was that I was getting a crash without detailed error information.  I knew the line in my code that seemed to trigger the issue, a performSegueWithIdentifier call, but the error was an "attempt to insert nil object at 0" on an NSArray, which I wasn't calling.  It seemed like the details of the stack trace were obscured behind the performSegueWithIdentifier call.

I was extremely pleased when the following stack overflow post helped me add some code to the AppDelegate that got the stack trace symbolicated and printed out in the debug window:

http://stackoverflow.com/questions/7841610/xcode-4-2-debug-doesnt-symbolicate-stack-call

After that I could see that the problem was related to a NavigationItem in one of my controllers that I hadn't hooked up.  It was perhaps an understandable slip: as described in one of my previous posts, I had to replicate one of my viewcontrollers to support a hierarchy browse, since currently viewcontrollers can't segue to an instance of themselves - at least not using the storyboard.  So the problem was I had hooked up a navigation item in one controller, but not in the replicant ...

I also just renewed my Apple iOS development program membership.  Interesting to discover that they will pull your apps if you don't renew, so I guess I am on the hook for years now.  Google's one-off payment for Android app publication seems increasingly good value by comparison.  What's particularly frustrating is that having accidentally upgraded my iPad and iPhone to 5.1, I can no longer deploy apps from Snow Leopard - I must now upgrade to Lion, since Apple will not allow me to downgrade my iOS devices to 5.0, argh!

Monday, May 28, 2012

Python SVM on OSX 10.6.8

So I finally got some Python SVM love on OSX 10.6.8 by using the Python bindings for libsvm.  However that was after having spent half a day trying to build PyML and Scikit.  I had thought these "pure" python projects would be the simpler route to take.  I got stuck for a bit installing numpy and other supporting packages, but even having got these working I couldn't get PyML or Scikit to build.  The problem was errors like this:

compiling C++ sources
C compiler: c++ -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3

compile options: '-I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -c'
c++: sklearn/svm/src/libsvm/libsvm_template.cpp
llvm-g++-4.2: error trying to exec '/usr/bin/../llvm-gcc-4.2/bin/powerpc-apple-darwin10-llvm-g++-4.2': execvp: No such file or directory
lipo: can't figure out the architecture type of: /var/tmp//ccpSkuud.out
llvm-g++-4.2: error trying to exec '/usr/bin/../llvm-gcc-4.2/bin/powerpc-apple-darwin10-llvm-g++-4.2': execvp: No such file or directory
lipo: can't figure out the architecture type of: /var/tmp//ccpSkuud.out
error: Command "c++ -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/include -c sklearn/svm/src/libsvm/libsvm_template.cpp -o build/temp.macosx-10.3-fat-2.7/sklearn/svm/src/libsvm/libsvm_template.o" failed with exit status 255

and

building 'PyML/containers/ext/_csparsedataset' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -arch ppc -arch i386 -g -O2 -DNDEBUG -g -O3 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c PyML/containers/ext/SparseDataSet_wrap.cpp -o build/temp.macosx-10.3-fat-2.7/PyML/containers/ext/SparseDataSet_wrap.o
In file included from /usr/include/architecture/i386/math.h:626,
                 from /usr/include/math.h:28,
                 from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyport.h:312,
                 from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/Python.h:58,
                 from PyML/containers/ext/SparseDataSet_wrap.cpp:149:
/usr/include/AvailabilityMacros.h:108:14: warning: #warning Building for Intel with Mac OS X Deployment Target < 10.4 is invalid.
/usr/libexec/gcc/powerpc-apple-darwin10/4.0.1/as: assembler (/usr/bin/../libexec/as/ppc/as or /usr/bin/../local/libexec/as/ppc/as) for architecture ppc not installed
Installed assemblers are:
/usr/bin/../libexec/as/x86_64/as for architecture x86_64
/usr/bin/../libexec/as/i386/as for architecture i386
/usr/bin/../libexec/as/arm/as for architecture arm
lipo: can't open input file: /var/tmp//ccyq6iNy.out (No such file or directory)
error: command 'gcc-4.0' failed with exit status 1


I did all the usual searches and played around with gcc settings and so forth, but generally to no avail.  I guess I should have posted the above with details to the relevant mailing lists, but having got libsvm to work I just focused on that.  I'll come back to this if libsvm hosting becomes an issue for an online deployment ...

Monday, April 30, 2012

Blogging better than local files for data management

So I'm trying to find my notes on this year's courses. Having opened lots of other text files recently, my class notes have been pushed off TextEdit's 10-item history, and I can't find them in the usual folders I would expect. Spotlight always goes off and does a complete full-text search in the wrong location, AND my whole computer has slowed down because I am trying to push all the 6GB ScreenFlow files off to my external hard drive. What I am actually trying to do is go through my 60 chrome tabs and offload the stuff that I don't want to lose when I next have to reboot to run Google Hangouts. So it seems, ironically, that blogging is easier for this kind of data management than files on my hard disk.  If only my file manager let me type in file names and get autocomplete like I do in my browser.

Anyway here is a good resource I found on learning github:

http://learn.github.com/p/tagging.html

I also keep meaning to try out this approach of having git push to multiple repositories, so that a heroku push would also push to github:

http://stackoverflow.com/questions/165092/can-i-push-to-more-than-one-repository-in-a-single-command-in-git

I'm trying to pull all my github related resources together as I'm going to make github use a required part of all my future online CS classes.  I have gotten a bit sick of email attachments and dropbox.

And a random note, I managed to fix my printer by using a damp cloth:

http://h30434.www3.hp.com/t5/Other-Printing-Questions/Officejet-Pro-8500-quot-out-of-paper-quot-error-message/m-p/186605#M11353

Monday, April 23, 2012

iOS Storyboards tying me in knots

So I was wondering whether it would make sense to have a reciprocal segue when navigating down a hierarchy of categories of unknown depth in iOS 5.  I was thinking that the iOS navigation controller should provide the back button stuff etc., but it seems like if I set this up in a storyboard then I need to keep seguing over and over again to the same controller displaying the current list of categories ...

Even after some help from iphonedev on irc.freenode.net I wasn't able to create a segue from one viewcontroller to itself, but I was able to make a segue to a clone of that view controller (Option-drag to copy out of the left-hand list view of items, thanks Psy), and I got the convoluted hook-up pictured here.  Seems remarkably unDRY, but it works, and I can now navigate up and down the category hierarchy with the navigation controller handling the back button to get back up again as necessary.

Friday, April 13, 2012

Eclipse Android Development Toolkit (ADT)

So I'm grading like 30 android phone and tablet projects for my mobile programming class, and there are things I keep encountering through the Eclipse Android Development Toolkit (ADT). Some good, some bad.

In particular there's this issue of the various android emulators and devices getting detached from the ADT, which means you can't deploy.  You then have to do this convoluted thing of going to the DDMS perspective in Eclipse, clicking on the extra-commands arrow in the device window, selecting "reset adb", and then getting this confusing error message - "Adb failed to restart! Make sure the plugin in properly configured" - which you can't copy and paste and have to ignore, and then your devices/emulators connect back up.

This happens pretty regularly for me on OSX, and has been happening since at least Android 1.5 I think.  There are a few google hits on this, but not many, and this is likely in part because the error message can't be copied and pasted, and most experienced coders simply click "OK" and move on, but it's very confusing for newbies.

You'd have thought that a big company like Google could get this fixed up.  Maybe no one cares enough - maybe it's only happening for a few of us.  Maybe I'd have to bring it up at Google I/O or something to get it fixed.

In general though, any other kind of error that can be copied and pasted is usually easily solved via StackOverflow, e.g. when the ADT tells me that I can't install an app due to an existing one with the same signature, and I have to type "adb uninstall whatever" in a shell, and then I get this from the shell:


unknownc8bcc8da38b2:platform-tools samueljoseph$ ./adb uninstall edu.hpu.csci4702
- waiting for device -
error: more than one device and emulator

So I google the error message in quotes and I go to the stackoverflow hit since that's usually the best way to make progress:


and I see how to uninstall from a particular device.  So that all kind of works, but I wonder why these IDEs couldn't be a little more user-friendly - I have similar issues with XCode.  With both XCode and Eclipse ADT I feel like a lot of disparate tools have been bundled together and are not quite working properly.  Maybe that's just the nature of things ...


Monday, April 2, 2012

Customizing Google Sites to Individual Users

Ever since Google Groups discontinued support for pages and files I've been running my classes on Google Sites, at least for hosting content.

The sign-in link for individual users in Google Sites is very carefully hidden at the bottom of the screen, making it seem like the emphasis is on hosting content for everyone rather than customizing content for individual users - but the latter is exactly what I'd like to do in order to, for example, show students in my classes which assignments they've completed.

It seems like there should be some way to do this, but my searches so far have proved inconclusive - I keep searching and not quite coming up with anything concrete.  Google Apps Script looks like it should help, but I'll have to look again once the semester has finished:

https://developers.google.com/apps-script/articles

XCode StoryBoarding and SplitViews

So I burnt a lot of time last week trying to get XCode Storyboards working nicely with UISplitViews on the iPad.  One of the more helpful resources was this one:

http://www.techotopia.com/index.php/Using_Xcode_Storyboarding_(iPhone_iOS_5)

which I found by searching Google for "programmatically trigger segue".  My previous search for "splitviewcontroller segue" was less helpful, turning up short, not really relevant StackOverflow posts.

I also skimmed the following tutorial, but wasn't able to pull the information I wanted fast enough:

http://www.raywenderlich.com/5191/beginning-storyboards-in-ios-5-part-2

Ultimately I got it all working, and I think the biggest roadblock was grasping that the segues still needed to be created in the Storyboard interface, but that I would trigger and catch them programmatically - since I wanted different segues to take place depending on whether the user clicked a table cell in the master view, or the associated detail disclosure button.  The trick was to have segues in the Storyboard running from the master view to two other UI views, one for each of the possible segues, and then adjust the segue type to specify the detail view, ensuring that the right part of the split view controller was updated.

Particularly after some not too dissimilar trials and tribulations with Android fragments, it makes me wonder if there isn't some way to make mobile development a little less hazardous ...

Of course, having got that working, there was still the unpleasantly convoluted issue of trying to make the text in a UITableViewCell word-wrap to keep me entertained:

http://stackoverflow.com/questions/2906090/how-to-make-text-in-a-uitableviewcell-wordwrap
http://stackoverflow.com/questions/494562/setting-custom-uitableviewcells-height
http://the-lost-beauty.blogspot.co.uk/2009/11/multi-line-uitableviewcell-using.html

I kind of hacked a solution there.  It seems remarkable that there isn't a dynamic sizing solution that doesn't require hacking at code - maybe I just haven't found it yet.

I think I was also tripped up by the detailTextLabel not being visible:

http://stackoverflow.com/questions/5190648/why-is-detailtextlabel-not-visible

All my attempts to set a nice background gradient for some elements in the table view failed :-( The top three StackOverflow hits for my Google search on "CAGradientLayer UILabel" seemed promising, but I couldn't get any of them to work.  I did, however, manage to get something to appear by creating a dummy transparent PNG of the right size, and was then able to dynamically create a UILabel with a number on it.


Android ListViews in Tablet Fragments

Putting together a ListView demo for my mobile programming class turned out to be a bit of a pain. I guess I set myself up for difficulty by trying to replicate Paul Hegarty's demo of adding a favourites list to a tablet graphing calculator, but I thought I had sufficiently reduced the difficulty level by not trying to replicate the iPad popover, instead going for a simple ListView embedded in the main Calculator view.

I fell foul of a number of issues, several of which I have encountered before - one might have hoped I wouldn't be tripped up by them again.  They include:

1) the need for embedded ListViews to have special IDs
2) the nature of ArrayAdapters
3) variations arising from fragments

One of the early errors that slowed me down, and forced me to create a separate FavesFragment to handle the list in isolation, was the "inflating fragment error".  The following StackOverflow posts helped orient me ...

http://stackoverflow.com/questions/7162403/binary-xml-file-line-9-error-inflating-class-fragment
http://stackoverflow.com/questions/6500408/add-static-header-to-listfragment-in-oncreateview

I spent a lot of time thinking that an AsyncTask was going to be necessary to enable the ListView data to be updated (e.g. to add more rows), but that turned out not to be the case.  The key trick was to do any addition to the data through ArrayAdapter.add, and basically forget about any reference to the initial data passed to the ArrayAdapter - ultimately I just initialized with an empty ArrayList.  There were several confusing StackOverflow posts that deflected me from ultimately finding salvation in the android IRC channel on freenode.net:

http://stackoverflow.com/questions/6799121/listview-freaks-out-when-i-try-to-update-the-arrayadapter-is-the-header
http://stackoverflow.com/questions/3669325/notifydatasetchanged-example
http://stackoverflow.com/questions/9206275/how-to-create-multiple-listviews-with-fragmentsif-possible-with-listfragment

Well, the second of those should have helped me more, but I was trying to do too much at once, as well as programming Pythonically in Java, which often gets me into trouble ...

And I think this android developer blog got me a bit further:

http://android-developers.blogspot.co.uk/2011/02/android-30-fragments-api.html

but annoyingly it was like much of the Android docs: code fragments, but not an actual working system.  Ultimately it was IRC that sorted me out, with other developers pointing out that the key issue was updating the data through the ArrayAdapter itself.

I also burnt time seeing if I could get some simple persistence going with Preferences, but it seemed that, unlike in iOS, I couldn't store any more than a simple set of Strings in lightweight preferences, which wasn't quite sufficient for the stack object I was working with - or maybe it would have been okay?  Hmm ....

Language Models and Python Reflexes

So I burnt slightly more time than necessary on the NLP homework writing my own bigram/unigram calculating code in order to calculate plus-one smoothing values on a small corpus.  However, it made for interesting Python background, in particular the much-tripped-over integer division issue in Python:

http://stackoverflow.com/questions/1267869/how-can-i-force-division-to-be-floating-point-in-python
http://stackoverflow.com/questions/1282945/python-integer-division-yields-float
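The pitfall those threads describe is easy to reproduce in a plus-one smoothing calculation like mine.  Here's a minimal sketch with made-up counts (not from the actual homework corpus) - in Python 2, dividing two ints truncates, so the cast matters; Python 3 divides truly by default:

```python
# Hypothetical counts, purely for illustration.
count_bigram = 3    # C(w1, w2), e.g. count of ("the", "cat")
count_unigram = 10  # C(w1), count of "the"
vocab_size = 7      # V, number of distinct word types

# Plus-one (Laplace) smoothed bigram estimate:
#   P(w2 | w1) = (C(w1, w2) + 1) / (C(w1) + V)
# The float() cast guards against Python 2 integer division,
# where 4 / 17 would silently evaluate to 0.
prob = float(count_bigram + 1) / (count_unigram + vocab_size)
print(prob)  # 4/17, roughly 0.235

# When you genuinely want the floor, // is explicit in both Python 2 and 3:
print(7 // 2)  # 3
```

The other Python 2 fixes from those threads are `from __future__ import division`, or writing one operand as a float literal (e.g. `17.0`).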

As well as an interesting review of the multitude of quick ways to get an item count from a list in Python:

http://stackoverflow.com/questions/2401885/how-to-get-item-count-from-list-in-python
http://stackoverflow.com/questions/3496518/python-using-a-dictionary-to-count-the-items-in-a-list
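For the unigram counts themselves, `collections.Counter` covers most of what those threads compare; a quick sketch over an invented token list:

```python
from collections import Counter

# Invented token list, purely for illustration
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Counter builds the whole frequency table in one pass
unigram_counts = Counter(tokens)
print(unigram_counts["the"])  # 2

# list.count works for a single item, but re-scans the list on each call
print(tokens.count("cat"))  # 1
```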

and my old favorite how to grab a list pairwise in Python:

http://stackoverflow.com/questions/1257413/iterate-over-pairs-in-a-list-circular-fashion-in-python
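For bigram extraction the non-circular variant is usually what you want: zip the list against itself shifted by one.  A sketch with an invented sentence:

```python
# Invented tokens; <s> and </s> are the usual sentence boundary markers
tokens = ["<s>", "the", "cat", "sat", "</s>"]

# Adjacent (non-circular) pairs, i.e. the bigrams:
bigrams = list(zip(tokens, tokens[1:]))
print(bigrams)
# [('<s>', 'the'), ('the', 'cat'), ('cat', 'sat'), ('sat', '</s>')]

# The circular version from that StackOverflow question wraps the
# last element back around to the first:
circular = list(zip(tokens, tokens[1:] + tokens[:1]))
```

(In Python 2, `zip` already returns a list; the `list()` call is harmless there and required in Python 3.)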

iPhone/iPad demos for online class

So I was impressed to see Paul Hegarty demoing iPad apps in the Stanford iOS class using an iOS dock-to-HDMI cable adapter, and there seem to be many of these available.  However, that doesn't quite meet my needs for an online class.  I'd like to see my iPad or iPhone screen appearing in a window on my desktop computer, so that I can run a demo and use Screenflow to record the activity on the phone/pad along with slides, XCode etc.

There seems to be some talk of supporting this via VNC or similar, but a superficial look suggests that all these are for viewing your desktop computer on your iPad etc., and not quite what I am looking for.

http://forums.macrumors.com/showthread.php?t=912536
https://secure.logmein.com/products/ios/

Friday, March 30, 2012

Not so much Ice Cream Sandwich just yet

So I go to my Acer tablet and find it's on Android 3.0, ugh - need to upgrade, need to get it on Wifi, so I need the MAC address (googled "acer android mac address"); of course then my BTInternet box wanted my password to adjust the MAC filtering, so I had to start up FF cos most of my passwords are stored back there, but FF bootup takes a while.  In the meantime I get an annoying text from my cell provider "3", which I can't shut off without logging in to their website, for which I don't remember the password, and their password reminder requires that I give them the last 6 elements of the SIM, and I don't have a paperclip to push the SIM out of the iPhone since I used the spare paperclip that I usually carry in my wallet to fix my 3-year-old's crocs the other day, argh!

Found a staple; that failed; must look for a paperclip - in the meantime got the system update started for the Acer.  Got a paperclip, got the SIM out, got the password sent to my mobile device, but was unable to find anything about opting out on the 3 website ... hmm.  Seems like others have had trouble opting out.  I found what seems like the right place to do it for 3:

https://www.three.co.uk/My3Account/Pay_Monthly/Marketing_preferences

and I'd post back to those previous two forums, but both require me to sign in ... ah, we need a better overall solution for passwords/security/spam etc. ...

Got to wait till the Acer is on 30% battery before the system update, but no USB charging, argh!  Ah, but the charger can be plugged in separately.  And there should be an Ice Cream Sandwich update.  Google searches for Acer development settings didn't get me anywhere, but I found them under settings -> applications -> development, so now Eclipse can see the tablet, but I can't deploy 4.0 because the charge is still only at 17%.

Okay, got charge.  First update was not 4.0 - starting a second update.  Feels like I have to wake the thing up to get the downloading to continue; okay, so now I am on 3.1 and downloading another update ... switched on the "stay awake" setting to keep the screen on whilst charging.  Now I'm on 3.2 and no more updates, boo!  Looks like it was only the A200 that got the 4.0 update, not the A500, which is slated for April, so no 4.0 tablet demos for the moment :-(

Thursday, March 29, 2012

BDD/TDD for Mobile (Android/iOS) Part I: Robolectric Install

So inspired by the Coursera SaaS class, I have been bashing my head against TDD/BDD for mobile again. Let's document some of my findings.

Robotium and Robolectric are black box testing options for Android. I got stuck on Robolectric needing Apache Ant >= 1.8, and I just came across my install attempts again in my browser tabs. So I just got that all set up.

I did notice that I seemed to have all sorts of Ant paraphernalia around. I do of course remember Ant reasonably fondly from my intense Java programming days - certainly with more fondness than Maven.

Anyway, at the risk of exposing my shaky UNIX sysadmin knowledge, I found that I had Ant installed in a couple of places on my OSX machine, specifically:

/Users/samueljoseph/Code/java_libs/apache-ant-1.7.0
/usr/share/java/ant-1.7.1
/usr/share/java/ant-1.8.1
/usr/share/java/ant-1.8.2

so it turned out that I already had the Ant I needed on my OSX machine, and I hadn't necessarily needed to pull down ant-1.8.3 - but I had done so anyway, and having done so I threw that in there as well:

/usr/share/java/ant-1.8.3

hastily adjusting the groups and owners in what looked to me like a neat fashion:


 sudo chown -R root /usr/share/java/ant-1.8.3
 sudo chgrp -R wheel /usr/share/java/ant-1.8.3

although to be honest I am not sure whether that was necessary, or even done correctly; in fact it turned out that my ant on the command line was already pointing at 1.8.2


/usr/bin/ant
unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ls -ls /usr/bin/ant
8 lrwxr-xr-x  1 root  wheel  22 Jan 25 11:36 /usr/bin/ant -> /usr/share/ant/bin/ant
unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ls -la /usr/share/ant
lrwxr-xr-x  1 root  wheel  14 Jan 25 11:36 /usr/share/ant -> java/ant-1.8.2

through a series of symbolic links, and it appears that an ANT_HOME variable set in my .profile file was overriding that.

unknownc8bcc8da38b2:RobolectricSample samueljoseph$ more ~/.profile 
export PATH=/opt/local/bin:/opt/local/sbin:$PATH
export DISPLAY=:0.0
export EDITOR=/usr/bin/nano
export ANT_HOME=/Users/samueljoseph/Code/java_libs/apache-ant-1.7.0
export JAXB_HOME=/Users/samueljoseph/Code/java_libs/jaxb-ri-20070122
export JUNIT_HOME=/Users/samueljoseph/Code/java_libs/junit4.4
export AVETANA_HOME=/Users/samueljoseph/Code/java_libs/avetana-jsr-82/
export BLUECOVE_HOME=/Users/samueljoseph/Code/java_libs/bluecove/
export JAVABT_HOME=$AVETANA_HOME
PROMPT_COMMAND='echo -ne "\033]0; ${PWD/$HOME/~}: ${USER}@${HOSTNAME}\007"'

and removing that ANT_HOME variable and (I think) running "source ~/.profile" allowed Robolectric to run:

Buildfile: /Users/samueljoseph/Code/RobolectricSample/build.xml

clean:

-pre-clean:

clean:

-setup:
     [echo] Gathering info for RoblectricSample...
    [setup] Android SDK Tools Revision 16
    [setup] Project Target: Google APIs
    [setup] Vendor: Google Inc.
    [setup] Platform Version: 2.3.3
    [setup] API level: 10


Although to be honest, I now forget what Robolectric is actually supposed to be doing.  I had started my poking around by finding a deleted blog article about installing Ant on OSX:


http://webcache.googleusercontent.com/search?q=cache:X40LT6LETnIJ:gauravstomar.blogspot.com/2011/09/installing-or-upgrading-ant-in-mac-osx.html+&cd=6&hl=en&ct=clnk&gl=uk

Perhaps it was deservedly deleted, due to a number of technical errors.  Either way, between that article and the various symbolic links I discovered, I was left no clearer on whether there is some systematic method to manage Ant on OSX, so I just went with the location that had greater mass, i.e. the two or three versions of ant in it.  At least on Debian I used to be pretty sure that apt-get was the way to grab software, and on Windows all bets were off; I used to try to use ports consistently on OSX, but then that failed when I upgraded the OS, and then it started working again, and now I am just not sure what I am supposed to be doing ...

So, working through deleting more tabs, I found some slides that covered some of the different testing options:

http://www.slideshare.net/atobulreddy/android-automation-testing-by-atobulreddy

which reminded me about the Android instrumentation framework, which I do have running.  Actually, it seemed I could just run vanilla JUnit tests in Eclipse on peripheral Android classes (i.e. ones without Android-specific functionality) within my Android projects.  When I added a few Android instrumentation tests (in a separate Android Testing Project, as per the instructions), the existing vanilla JUnit tests also got run anyway.  The Android instrumentation framework, which I think had been painfully slow before, seemed not too bad, particularly if I ran the tests on my externally connected Galaxy Nexus.

There was a blog article I was following (which I just found, yay!) from 2009 talking about the pain of Android testing; perhaps with faster and faster Android phones we can cut out the slow emulator and make Android testing less arduous.  Although I get the sense the Android instrumentation framework may have been released since that article was written - or at least the article is trying other approaches that don't work so well.

Of course all this reminds me how frustrating it was that my Android talk at the testers' workshop last summer was at the same time as a real Android expert talking about Android testing, and his talk wasn't recorded (whereas mine was), so anyway ... but back to the slides, which mention Monkey, Robotium and Robolectric.  I've yet to try Monkey, although there's mention of Python, so that looks interesting.

So anyhow, it would appear that I downloaded the Robolectric sample, and it looks like I was following their github sample project instructions when I got distracted by other work. So "ant debug" now generates an app to be debugged, but I can't run adb till I have an emulator running, so now I wait while first Eclipse and then the emulator start up.  I reflect on how great it would be if I properly documented all of my hacking processes like these, particularly because in the two days prior to hacking on Android BDD/TDD frameworks I had done the same for iOS: I got the OCUnit tests working for iPhone up to a similar level of functionality as I subsequently got the vanilla unit tests working for Android, and found that OCUnit and the Android instrumentation framework pretty much parallel each other in allowing one to simulate button presses in code via calls to specific Activities (Android) or Controllers (iOS).

And then in the two days following, my attempts to look at Robotium and Robolectric - which interestingly parallel at least some of the capabilities of the Instruments "Automation" framework on OSX, which I had also been looking at - got interrupted: I was sucked into iPad development and burned many hours solving little iPad issues, particularly relating to SplitViewControllers, thus generating countless tabs of links to places like StackOverflow and burying my tabs on Robolectric and Ant installation procedures.

BTW, the Android emulator is still booting as I write this, and it occurs to me I could have just plugged in the Galaxy Nexus ... so I do that, and reflect that it would be nice to write out my programming/hacking thoughts with links to all searches and finds, relevant and otherwise.  I speculate about some kind of semi-automated process, but conclude that really I should just keep a blog window open at all times for documentation purposes ...

So adb doesn't seem to detect the connected Galaxy Nexus, and the emulator is STILL booting, so I start to close more tabs and find a few other interesting notes to record, such as the Android equivalent of iOS popovers:

http://stackoverflow.com/questions/5663912/android-popover-controller-same-as-ipad

which don't immediately seem to support the full flexibility of iOS popovers.

But now the emulator is running and I get this error:


unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb -e install bin/RoblectricSample-debug.apk
error: device not found

Googling the error message takes me here:

http://stackoverflow.com/questions/5405562/adb-errordevice-not-found

which reminds me of that command to get the list of devices

unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb devices
List of devices attached
01498B1A16010012 device

which reminds me that Robolectric won't work with real devices, so I unplug the Nexus - and my emulator is still not in the list:


unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb devices
List of devices attached

unknownc8bcc8da38b2:RobolectricSample samueljoseph$



https://www.google.co.uk/search?q=device+not+showing+in+adb+devices
http://stackoverflow.com/questions/7502011/emulator-not-showing-in-adb-devices

makes me try the following:


unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb kill-server
unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb start-server
unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached
emulator-5554 device

so we are good there - the same issue we often encounter in Eclipse.  Such a pain - what is up with that, Google?

So anyway, then I get:


unknownc8bcc8da38b2:RobolectricSample samueljoseph$ ./../android-sdk-mac_86/platform-tools/adb -e install bin/RoblectricSample-debug.apk
708 KB/s (278190 bytes in 0.383s)
pkg: /data/local/tmp/RoblectricSample-debug.apk
Success

but no tests run.  I can run the app by selecting it, and get a few screens including a Pivotal Tracker log-in, but I am left wondering whether I just ran some tests, or whether using Ant instead of Maven broke all this.  I'll add a screenshot and leave it there for the moment.  Looks like I have missed something, or perhaps I just have to write some tests of my own ...