Tuesday, June 28, 2016

Critical Mass of a Community

The holy grail of the Agile Ventures community, and perhaps any community, is to achieve "Critical Mass".  That's the point at which the community becomes self-sustaining and activity proceeds without relying on any one particular individual to keep it going.  "Critical Mass" is a term from physics describing the threshold mass of fissile material required to sustain a nuclear chain reaction.

In nuclear material it's the movement of particles called "neutrons" that causes individual atoms (in particular the atomic nuclei) to split apart, or undergo what's called nuclear fission.  What makes a nuclear explosion possible is that this process of fission releases additional neutrons, which can go on to cause other atoms to split apart.  If you have a large enough amount of the right material it's almost inevitable that each neutron generated will collide with another atom as it travels through the material, which generates more neutrons which collide with other atoms and so on.  This is called a chain reaction.  Have too little material and the neutrons will leave the material without having hit other atoms, and the chain reaction dies out.

Let's explore the analogy with a community, in particular a pair programming community.  Each pairing session could be considered an atom.  Assuming you have one pairing session take place (and it goes reasonably well), you'll end up with two people who are interested in pairing again.  They'll then be searching for other pairing sessions, but if there are none available, or none that they happen to be interested in (wrong programming language or platform), then it's likely these two will drift off and perhaps not try to pair in the same community again.  However if these two do find other pairing sessions, you can see how the single successful pairing event can lead to two more.  Assuming those sessions go well, you have four people now looking to pair, and so on.

Under the right conditions you can get a chain reaction.  It requires a critical mass of people taking part in pairing sessions.  Ideally whenever anyone wants to find a pair, there is always someone there ready to go.  Of course all this depends on people being able to find and join pairing sessions and also for them to go well.
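The analogy can even be made quantitative with a toy model (entirely illustrative, not anything we've built): if each good session leaves two people seeking another session, and each seeker finds a new session with some probability, then the expected number of sessions per "generation" is multiplied by twice that probability, so a match rate of 0.5 is the critical threshold.

```ruby
# Toy branching-process model of the pairing "chain reaction".
# Each session leaves 2 seekers; each seeker starts a new session with
# probability `match_rate`, so expected sessions per generation follow
# s' = 2 * match_rate * s. The critical threshold is match_rate = 0.5.
def expected_sessions(match_rate, generations, initial = 1.0)
  series = [initial]
  generations.times { series << 2 * match_rate * series.last }
  series
end

expected_sessions(0.4, 10).last # sub-critical: activity dies out
expected_sessions(0.6, 10).last # super-critical: activity grows
```

With a match rate of 0.4 the expected activity decays towards zero over ten generations; at 0.6 it grows several-fold, which is the "explosion" we're after.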

Too few people and there just aren't that many opportunities for pairing; but having lots of people is not enough on its own.  Imagine that lots of people are trying to pair, but that problems with the interface mean that people trying to join a pairing session end up in the wrong location.  No pair partner, no pairing.  Michael and I uncovered one such problem with the AgileVentures interface last week.  Hangouts that had people in them were being reported as "not live" after 4 minutes.  This meant that on a fair number of occasions people attempting to join a hangout for pairing or for a meeting would find themselves on their own in a separate hangout.

We've just rolled out a fix for this, and hopefully this will be another step towards achieving critical mass in the community.  It's unlikely to be the only step required, as having a good pairing experience is more complex than nuclear fission.  We also want to adjust our user experience to maximise the chances of a good pairing experience for everyone.  It's not clear what the best way to do that is, but clearly getting two people into the same hangout at the same time is an important prerequisite.  Things that we're exploring include adding automated "pair rotation" timers to the hangout itself; having users rate their pairing experience; reflecting pairing activity through user profiles and so on.
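As a flavour of the first of those ideas, a "pair rotation" timer could be as simple as this sketch (a hypothetical class, not something in WebSiteOne):

```ruby
# Hypothetical "pair rotation" timer: given a rotation interval and the
# pair's names, report who should be driving at any point in the session.
class PairRotationTimer
  def initialize(interval_minutes:, names:)
    @interval = interval_minutes
    @names = names
  end

  # Who should be driving `elapsed_minutes` into the session
  # (integer division picks the current rotation slot).
  def driver_at(elapsed_minutes)
    @names[(elapsed_minutes / @interval) % @names.size]
  end
end

timer = PairRotationTimer.new(interval_minutes: 15, names: %w[alice bob])
timer.driver_at(0)  # => "alice"
timer.driver_at(20) # => "bob" (one rotation elapsed)
```

The interesting design question is less the timer itself and more how to surface it inside the hangout so that pairs actually swap roles.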

We need to carefully monitor the changes and fixes we just made to see how the proportion of pairing event participation changes, and continue our Agile iterative process of making small changes and reflecting on their effect.  Making it more obvious which events are live might lead to more disruption in pairing events, or it might make observing ongoing pairing events easier, and that might make people more or less inclined to initiate their own pairing events.  It's not simple, but with careful measurement hopefully we can find that sequence of changes to the user experience that will lead to critical mass!

Friday, June 24, 2016

Analyzing Live Pair Programming Data

The recent focus for the AgileVentures (WebSiteOne) development team has been making the process of joining an online pair programming session as smooth as possible.  The motivation is two-fold: one, we want our users to have a good experience of pairing, and all the learning benefits it brings; two, we want large numbers of users pairing so that we can get data from their activity to analyse.  The latter motivation sort of feeds the first one really, since the point of analysing the data is to discover how we can serve the users better, but anyhow ... :-)

Several months back we created an epic on our waffle board that charted the flow from first encountering our site to taking part in a pairing session.  We identified the following key components:
  1. Signing Up
  2. Browsing existing pairing events
  3. Creating pairing events
  4. Taking the pairing event live
  5. Event notifications
  6. Event details (or show) page
  7. Editing existing events
The sequence is only approximate, as signing up/in is only required if you want to create an event, not to browse and join events.  The important thing was that there were various minor bugs blocking each of the components.  We set about trying to smooth the user experience for each of them, including sorting out GitHub and G+ signup/signin issues, providing filtering of events by project, setting appropriate defaults for event creation and ironing out bugs in event edit and update, not to mention delivering support for displaying times in the user's timezone, and automatically setting the correct timezone based on the user's browser settings.

There are still other points that could be smoothed out, but we've done a good portion of the epic.  The question that partly troubles me now is how to "put it to bed".  A new epic that contains only the remaining issues is probably the way to go.  But finally we've got to the point where we can start analysing some data, since the notifications for the edX MOOC pairing activity are flowing to the MOOC Gitter chat fairly reliably, we've just broken through on removing key confusions about joining an event, and we've worked out some problems with events displaying whether they are live.

This last element is worth looking at in a little more detail as it strongly affects the type of data we are gathering.  Creating (and tracking) Google Hangouts for pairing from the AgileVentures site involves creating a Google Hangout that has a particular plugin, called HangoutConnection, that knows the server side event it is associated with.  This was originally designed by Yaro Apletov and is written in CoffeeScript.  It gets loaded when the hangout starts and attempts a connection back to the main AgileVentures site.  Given successful contact an EventInstance object is created in association with the event.  This EventInstance includes information about the hangout such as the URL, so that other people browsing the site can also join the hangout without being specifically invited.  The HangoutConnection continues to ping the site every two minutes assuming the hangout is live, the plugin hasn't crashed and so on.
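The ping-and-liveness logic can be sketched in plain Ruby (class and method names here are assumptions for illustration, not the actual WebSiteOne code):

```ruby
# Sketch of the liveness rule: an event instance counts as "live" while
# the most recent HangoutConnection ping is less than four minutes old.
class EventInstance
  LIVE_WINDOW = 4 * 60 # seconds

  attr_reader :hangout_url, :last_ping_at

  def initialize(hangout_url, now = Time.now)
    @hangout_url = hangout_url
    @last_ping_at = now # creation counts as the first ping
  end

  # Called roughly every two minutes by the hangout plugin. The bug we
  # hit behaved as if only the first ping ever refreshed this timestamp.
  def ping!(now = Time.now)
    @last_ping_at = now
  end

  def live?(now = Time.now)
    now - last_ping_at < LIVE_WINDOW
  end
end
```

So long as `ping!` fires every two minutes the event never drops out of the four-minute window; stop refreshing the timestamp and `live?` goes false shortly after the hangout starts, which is exactly the symptom we saw.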

What Michael and I identified on Wednesday was that only the first of these pings actually maintained the live status, making it look like all our pairing hangouts were going offline after about 4 minutes.  This had been evidenced by the "live now" display disappearing from events somewhat sooner than appropriate.  This might seem obvious, but the focus had been on fixing many other pressing issues and usability concerns from the rest of the epic.  Now that they are largely completed this particular problem became much clearer (it was also obscured for the more regular scrums, which use a different mechanism for indicating live status).  One might ask why our acceptance tests weren't catching this issue.  The problem was that the acceptance tests were not simulating the hit of the HangoutConnection against our site; they were manipulating the database directly.  Thus, as is often the case, the place where the bug occurred was just the bit that wasn't covered by a test.  Adjusting the tests to expose the problem made the fix relatively straightforward.

This is an important usability fix that will hopefully create better awareness that hangouts are live (with people present in them), and increase the chances of people finding each other for pairing.  There's a lot more work to do however, because at the moment the data about hangout participants that is sent back from HangoutConnection gets overwritten at each ping.  The Hangout data being sent back from HangoutConnection looks like this:

    "0" => {
                   "id" => "hangout2750757B_ephemeral.id.google.com^a85dcb4670",
        "hasMicrophone" => "true",
            "hasCamera" => "true",
        "hasAppEnabled" => "true",
        "isBroadcaster" => "true",
        "isInBroadcast" => "true",
         "displayIndex" => "0",
               "person" => {
                          "id" => "123456",
                 "displayName" => "Alejandro Babio",
                       "image" => {
                     "url" => "https://lh4.googleusercontent.com/-p4ahDFi9my0/AAAAAAAAAAI/AAAAAAAAAAA/n-WK7pTcJa0/s96-c/photo.jpg"
                 },
                      "locale" => "en",
                          "na" => "false"
               },
                   "na" => "false"
    }

Basically the current EventInstance only stores a snapshot of who was present in the hangout the last time the HangoutConnection pinged back, and data from pings after the first two-minute update has been discarded.  We're about to fix that, but here's the kind of data we can now see about participation in hangouts:

#participants #hangouts
1             *
1             *
1             ****
2             *
3             **
1             ****
1             *
2             *
3             *
1             *
1             ***************
2             *
3             *
1             ******************************
2             ****
4             **

The above is just a snapshot that corresponds to the MOOC getting started; we're working on a better visualisation for the larger data set.  We can see a clear spike in the number of hangouts being started, and a gradual increase in the number of hangouts with more than one participant, remembering that the participant data is purely based on who was present two minutes into the hangout.
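For illustration, here's one way a chart like the one above could be generated - a hypothetical helper assuming a simple day/participants data shape, not our real schema:

```ruby
# Hypothetical reconstruction of the participation chart: bucket hangouts
# by day, then by the participant count recorded at the two-minute ping,
# and print one star per hangout in each bucket.
def participation_chart(hangouts)
  rows = hangouts
    .group_by { |h| [h[:day], h[:participants]] }
    .sort
    .map { |(_, n), group| format("%-13d %s", n, "*" * group.size) }
  ["#participants #hangouts", *rows].join("\n")
end

sample = [
  { day: 1, participants: 1 }, { day: 1, participants: 1 },
  { day: 1, participants: 2 },
  { day: 2, participants: 1 }, { day: 2, participants: 3 }
]
puts participation_chart(sample)
```

Grouping by day first is what produces repeated rows for the same participant count, as in the chart above.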

If the above data were reliable we might be saying: wow, we have a lot of people starting hangouts and not getting a pair partner.  That might be the case, but it would be foolish to intervene on that basis using inaccurate data.  Following the MOOC chat room I noticed some students at the beginning of the course mentioning finding hangouts empty, but the mood music seems to have moved towards people saying they are finding partners; and this is against the backdrop of all the usability fixes we've pushed out.

To grab more accurate participation data we would need to do one or more of the following:
  1. adjust the EventInstance data model so that it had many participants, and store every participant that gets sent back from the HangoutConnection
  2. store the full data sent back from every HangoutConnection ping
  3. switch the HangoutConnection to ping on participant joining and leaving hangouts rather than just every two minutes
  4. ruthlessly investigate crashes of the HangoutConnection
With reliable data about participation in pairing hangouts we should be able to assess some objective impact of our usability fixes as they roll out.  We might find that there are still lots of hangouts with only one participant, in which case we'll need to investigate why, and possibly improve awareness of live status and further smooth the joining process.  We might find that actually the majority of hangouts have multiple participants, and then we could switch focus to a more detailed analysis of how long participants spend in hangouts, getting feedback from pair session participants about their experience, and moving to display pairing activities on user profiles to reward them for diligent pairing activities and encourage repeat pairing activities.
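The first two items on that list could look something like this in-memory sketch (assumed names and payload shape, not the real Rails models):

```ruby
require 'set'

# Sketch of fixes 1 and 2: keep every ping, and accumulate the union of
# participants seen across pings instead of overwriting a snapshot.
class EventInstance
  attr_reader :pings, :participants_seen

  def initialize
    @pings = []                  # fix 2: full ping history
    @participants_seen = Set.new # fix 1: everyone ever present
  end

  # `payload` mirrors the hangout data shown earlier:
  # { "0" => { "person" => { "id" => "123456", ... }, ... }, ... }
  def record_ping(payload)
    @pings << payload
    payload.each_value do |slot|
      @participants_seen << slot["person"]["id"]
    end
  end

  def participant_count
    participants_seen.size
  end
end
```

With the union of participants rather than the latest snapshot, a hangout where a partner joined at minute ten would no longer be miscounted as a solo session.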

Personally I find this all intensely fascinating to the exclusion of almost everything else.  There's a real chance here to use the data to help adjust the usability of the system to deliver more value and more positive learning experiences.

Monday, June 20, 2016

Moving Beyond Toy Problems

What is life for?  To follow your passion. What is my passion? I find myself frustrated with the closed source, closed development nature of many professional projects; and on the flip side equally frustrated with the trivial nature of academic "toy" problems designed for "learning".

I love the "in principle" openness of the academic sphere and the "reality bites" of real professional projects.  I say "in principle" about academic openness, because while the results of many experiments are made freely available, sharing the source code and data (let alone allowing openness as to the process) is often an afterthought if it is there at all.  The MOOC revolution has exposed the contents of many university courses which is a fantastic step forward, but the contents themselves are often removed from the reality of professional projects, being "toys" created for the purpose of learning.

Toy problems for learning make sense if we assume that learners will be intimidated or overwhelmed by the complexity of a real project.  Some learners might be ready to dive in, but others may prefer to take it slow and step by step.  That's great - I just don't personally want to be spending my time devising toy problems, or at least not the majority of my time.  Also it seems to me that the real learning lies in the repeated compromises that one has to make in order to get a professional project out the door: balancing the desire for clarity, maintainability, readability and craftsmanship against getting features delivered and actually having an impact that someone cares about.

Professional projects are typically closed source, closed development, although there are more and more open source projects in the professional sphere.  The basic idea seems to be: we are doing something special and valuable, and we don't want you to see our secret sauce, or the mistakes we are making along the way.  Thus it might be considered anti-competitive for a company to reveal too much about the process it uses to develop its software products.  That said, companies like thoughtbot publish their playbook, giving us an insight into their process and perhaps increasing our trust that their process is a good one.  Even so we don't get to see the "actual" process, so that's not ideal for others trying to learn - but then most companies are not trying to support the learning process for those outside.

Personally I want to have a global discussion that everyone can take part in, if they want to.  I want an informed debate about the process of developing software where we have real examples from projects - real processes - where we can all look at what actually happened rather than relying on people's subjective summaries.

Maybe this is just impossible, and an attempt at the pure "open development" process of AgileVentures is destined to fail because by exposing exactly how we do everything we can't build up value to sustain our project?  That's what professional companies do right?  They have a hidden process, focus attention on the positive results of that process and then increase the perception that they have something worth paying for.  To the extent that they are successful they are building up reputation that will sustain them with paying customers, because those customers are inclined to believe the chance is good they'll get value for money.

If the customer had totally transparent access to every part of what goes on, they could just replicate it themselves, right? :-) Or a competitor could provide the same service for less?  However there's a strength in openness - it shows that you believe in yourself, and you demonstrate that you've followed the hard path through the school of hard knocks; maybe you are the right people to be able to adapt fast to the next change, even if others could copy aspects of what you do.

Everyone should have the opportunity to learn and boost their skills by taking part in such an open process. The shy might not want to participate directly, but they can learn by observing a real process that they otherwise wouldn't see until they could actually start in a job.  It's the old catch-22: "no job, no experience; no experience, no job".

This is what I stand for, what AgileVentures stands for.  An open process beyond open source, we call it "Open Development".  Meetings with clients, planning meetings, coding sessions, everything is open and available to all.  Where customers have confidential data, that is hidden, but otherwise we are as open as possible.  Of course that generates too much material for anyone to absorb, and we need help curating it, but the most amazing learning happens when new folk engage and take part in the process - thinking about the code in their pull requests in relation to the needs of a non-technical client being articulated by a project manager who cares about their learning, but also about the success of the project.  Come get involved, be brave, be open and get that experience that you can't get without a job, or that you can't get in your current job.

Friday, June 13, 2014

Thoughts about pair roulette, pairing in MOOC

One pedagogical argument is that pairing should be restricted to just pairs.  By having only two people in the pairing session we increase the likelihood that both are doing active work.  Others may benefit from observing, but they are also reducing the pool of other possible pair partners; and in the case where students are pairing on homework that is being assessed, there might be an argument that observers are freeloading.

However, taking an active part in a pair is very intimidating for many and so it's a big advantage in some ways if random pairing sessions are visible for observation and/or some degree of interaction with a wider group of learners.

There is a tension between the desire for each individual learner to operate in the manner that feels most comfortable at any given time (working solo, working actively in a pair, observing a pair etc.) and the desire to assess learners' abilities.  If we were only concerned with promoting learning, then we would not necessarily impose restrictions on observers, although there is the further case where learners would like to pair, but would like to restrict who is observing for privacy reasons.  Privacy reasons are unclear to the current author, but seem to revolve around the fear of aspects of one's personality being displayed for others to judge.  Online remote pair programming does not require that people display video of themselves, or even necessarily any audio, but still one's typing is exposed in real time, and one's decisions about what to type next, what code to create, are being exposed, in much the same way that they might be in an oral exam - and many people are intimidated by interviews and oral exams.  Superficially this is related to a fear that one will be judged as not having made the grade, but the author would love to hear insights from others on this.

Ideally learning situations such as remote pairing should not revolve around judgement of each other's abilities, and should provide a supportive environment for learning; but naturally all learners will be making judgements of each other (and themselves), relating to their partner's language ability, coding ability, and so forth.  Some learners are in a hurry and may feel that they don't have time to be pairing with someone they consider less skilled in areas they want to improve in.

The author would suggest that the idea of placing learners on a linear spectrum of ability does not make much sense.  Everyone's understanding and skills form a complex multidimensional entity.  For example, one individual may be very confident in Ruby String manipulation but much less confident as regards OOP, while another learner's understanding/confidence may be exactly the opposite.  As regards coding, one's number of years programming is often considered a guide to "ability level", but still there is no clear linear relationship, and learners might well be advised to be patient and discover what they can learn from every possible pair partner.

Having all pairing sessions recorded is arguably a good move, since it allows analysis of the pairing and ensures that any disputes about behaviour can be resolved with reference to a video of what actually happened.  Knowledge that one is being recorded (if not directly observed) serves as an incentive to behave more congenially.  The downside of recording is that it will increase the nervousness of some, and might prevent them from participating in the pairing session as actively as they would like.

Assuming that one can host pairing sessions, drop 2 or more people into them, and record them, the question arises of the best mechanism for pairing people up.  One of the key issues seems to be people's shifting schedules and being ready to pair.  If you add your name to a list of people wanting to pair, will you be available at the time someone else is ready?

If you have some mechanism for checking that someone is actually looking at a given page you could indeed have a list of people ready to go, although people may still be AFK (away from keyboard).  It's an interesting question whether people should be asked to register their interests for a pairing session, e.g. Homework 2, Project X, or open to suggestions.  Should you have a single list of everyone ready for impromptu pairing, or should there be some set of lists divided up at some granularity?  It seems like that depends on the numbers involved.  In the first instance perhaps one should just have a single list, until one demonstrates the need for more ...

There is also the question of whether one should have a list of ongoing sessions that others can browse and then join.  In the open source project case it seems the answer should be a resounding yes.  In the student case it seems we are caught between the desire to showcase the system running in progress (allowing others to learn by observing), and the desire to afford students additional privacy and reduce perceptions of freeloading.

If one is offering a service for free then one can argue that participants have a degree of obligation to share what they are doing, i.e. contributing back, although of course the flip side is that if participants feel uncomfortable then they will just not participate.  It seems reasonable to suggest that a premium version of the system might allow privacy; since participants are contributing to the system by paying for it.

Finally there is the question of how to effectively scaffold pairing sessions with novice pair partners.  One can imagine step-by-step pairing walkthroughs, but it is currently unclear what the best mechanism would be to insert these into a pairing session.  Perhaps a Cloud9/Nitrous session where the computer would prompt the pair partners to switch roles and have certain sorts of discussions at certain times.  A human coach can scaffold that with some degree of skill - how much of it can we automate?

Monday, July 29, 2013

Sam Joseph on MOOCs for Hawaii Business Magazine

I was recently interviewed by Hawaii Business Magazine's Pavel Stankov on the subject of Massive Open Online Courses, or MOOCs.

Pavel: So first off, tell me about the public class that you taught online at HPU? What class was it? How long was it taught?

Sam: The class was the combined CSCI 4702 Mobile programming and MULT 4702 Mobile Design classes, and was taught over the usual 14 weeks of the HPU Spring semester this year.  It's a class that I've been teaching in one form or another at HPU and UHM for about 8 years, and focuses on the design and programming of applications for mobile devices such as tablets and smartphones.

Pavel: Who pitched the idea for that and how successful was the course? Did anybody drop out?

Sam: I designed the course myself and I pitched the idea that HPU should trial it as a public beta.  I would say the course was moderately successful :-)  I continue to get the same positive feedback from students on this course as I have over the years.  All of the official HPU students enrolled in the course completed it, and the public format enabled some who had previously failed the course to retake and successfully complete it.  The majority of the non-HPU students taking the course did not complete it, but I still got positive feedback from them.  I don't know that "dropping out" as a concept is particularly useful for students who are not studying for credit.  If for-credit students "drop out" it is clearly a negative event, where a student has paid for support in their learning process and, for whatever reason, feels that they are not getting what they expect from a course.

When casual "MOOC" students are taking a course it is much more like they are receiving an encyclopedia.  They are gaining access to a set of materials that they can take and pick from as they please.  Since in the HPU public course trial we were not certifying their abilities I don't believe that their "drop out" rate of non-HPU students indicates the relative success or failure of the course.

Pavel: Would it be considered a MOOC, if it's not delivered through Coursera, edX, or Udacity? Was there a third party at all, or was it offered directly from HPU, just open to the public?

Sam: The first MOOCs were offered before Coursera, EdX or Udacity existed, so I don't think that who provides a course says anything about whether it should be considered a MOOC or not.  Whether we consider something a MOOC depends on four things: it being a course, it being available online, it being available to the public, and it being taken by a large number of students.  Our online public course trial had around 20 students combined so I think it qualifies more as a SOOC (Small Open Online Course) than anything else :-)  I used the free and open Google Sites framework to host and deliver the course along with other free open source tools and Google Apps Script code that I programmed myself.

Pavel: What class are you currently teaching through edX? What is the turnout? What are the expectations and what do you hope to achieve through offering it? Have you offered a course of such a scale before? How do you feel about it?

Sam: In collaboration with UC Berkeley's Professors Dave Patterson and Armando Fox, I am facilitating CS169X Software as a Service through EdX.  The current instance of the class has over 13,000 students enrolled and close to 250 teaching assistants.  The expectation is that we can spread the concepts of software engineering craftsmanship as widely as possible.  My personal hopes are that, by being closely involved in this course, my own HPU Software Engineering course will become even more valuable to the students taking it, in terms of the quality of the curricular materials they have access to and the range of other learners they can interact with.

Sam: One of the key values of delivering a for-credit class publicly with a mix of for-credit and casual students is that the students get to mix with a much wider range of learners.  Students can take part in collaborative learning with people from all over the world, who in many cases bring fantastic industry experience with them to the class.  My involvement in this summer's EdX course is the first time that I have taken a major role in a class of this scale and I am extremely excited about it.  I see the combination of MOOC delivery systems such as EdX with personal scaffolded collaborative learning experiences such as pair programming and group projects revolutionizing the nature of the educational experience.

Pavel: What are the challenges for a MOOC instructor? What is the hardest part? The easiest part?

Sam: The challenges are having a continuous flow of information, with questions from students coming in constantly 24/7.  All materials have to be of exceptionally high quality.  The hardest part is often just switching off for a moment to refresh yourself.  Delegation is the combined hardest and easiest part.  Given the 250 or so teaching assistants, I have to control myself not to dive too deep on problems from individual students the moment they come up, leaving the teaching assistants to triage the challenges the students are facing.  The MOOC instructor must listen carefully to their teaching assistants, balancing when to jump in with their expertise so as to benefit the maximum number of students possible.

Pavel:  If you were to start over, and change something in the way you approach your MOOCs, would you do so?

Sam: I am making an ongoing effort to change MOOCs so that they move away from the one-size-fits-all mode of learning and focus on the individual.  I think real-time interaction with fellow MOOC students, and open-ended projects of consequence to the individual students, is the key.  Certain Stanford Coursera classes have trail-blazed in this regard, such as Scott Klemmer's HCI course; however I think we can go a lot further.

Pavel: How do you address the issues of academic honesty?

Sam: Academic honesty is an issue in a class that tries to offer credit for a largely one-size-fits-all method of assessment.  My personal approach to academic honesty is to award credit for unique individual contributions.  To the extent possible within the contexts of HPU, Berkeley and EdX, I make all my classes dependent on a student's ability to offer unique individual project work.  In a class where projects are developed incrementally it quickly becomes clear if an individual is trying to attribute the work of others to themselves, even in the online context.  This is of course more complicated at large scales, but I believe we have the seeds in place to make academic honesty effectively a non-issue, and you are likely to see some very exciting developments in this regard over the next 24 months or so.

Pavel:  What is your take on the peer-grading approaches offered by some MOOC providers? Do you suppose there might be a better technology to control for plagiarizing?

Sam: I think peer-grading is an interesting approach.  I've used it in some of my classes, and I've taken MOOC classes in which it has been used.  I don't think it is yet quite in a form that delivers an ideal learning experience, but it's an excellent start.  Since I teach programming and design, plagiarising is not quite the issue that it is in other classes such as English and History; however it is still a concern.  I have also been taking a very interesting MOOC on cheating in online classes, and I think it's a mistake to focus on plagiarism and how to control it.  I believe the focus should be on igniting the imagination of individual students.  The key is for the instructor or teaching assistant to get to know the individual students personally, and ask them what it is they are really excited about doing.  The focus should be on providing a framework - scaffolding, if you will - that enables the student to do something that they are excited about, giving them access to the tools that allow them to achieve their dreams.  I would argue that plagiarism and academic dishonesty come largely from students not being interested in performing the academic exercises they are being set.  The solution is not to ask how to control plagiarism through technology, but what it is that interests an individual student.  Unlock the interest of the individual student and they will have no incentive to be academically dishonest; they will be truly motivated to create something of quality, and to develop the necessary skills to support that.

Pavel: Do you think some classes are better suited to be taught through a MOOC than others? Which ones?

Sam: Given the form of the current MOOCs from providers such as Coursera, EdX and Udacity, I would say that computer science courses are particularly well suited to being taught through a MOOC.  One might argue that introductory courses are also well suited to MOOCs; however, I am not sure I agree.  I think whether a class is suited to a MOOC depends on whose perspective we are talking about, e.g. educator, student, institution etc.  However, with the new remote collaboration technologies that are rolling out, such as Google Hangouts, multi-way Skype screen sharing etc., the majority of classes can be taught just as well in MOOC format as at a physical institution, if not better.  A clear exception would be those courses that attempt to train or instruct in the use of expensive specialist equipment that is not available to the individual at home.

Pavel:  Under what circumstances should colleges and universities award academic credit for third-party MOOC providers?

Sam: At the moment I think it's not in colleges' and universities' interest to award academic credit for third-party MOOC providers unless assessment has been verified through proctored examinations.

Pavel: How do you see the development of MOOCs in the observable future?

Sam: Difficult to say, although my personal hope and plan is that MOOCs will become a lot more accessible and personal, supporting students in talking in real time with their peers and instructors.

Pavel: Is there anything in particular about Hawaii that makes our location different when it comes to online higher education?

Sam: Hawaii of course has a large military presence, and military students really must have online education due to the nature of their work.  Hawaii is also geographically isolated, meaning that the range of educational choices is not what it is in other areas, making online education particularly valuable to Hawaii residents.

Pavel: How do you react to comments that MOOCs are undermining higher education? For instance, some people are inclined to say that if awarded academic credit, they would have no incentive to physically go to school when they can take everything online for free or a symbolic fee. Do you think this is a valid argument? Do you think this is a problem for faculty?

Sam: I guess the argument is that as MOOCs start awarding academic credit, many students might not attend a bricks-and-mortar "academy", thus undermining higher education institutions?  I think it's entirely possible that MOOCs may undermine higher education institutions that are not offering the highest quality of courses and value for money to their students.  I don't think that anything that itself promotes education can undermine education, unless one posits that students receive some particular benefit from attending a physical institution that they cannot receive online.  Individuals have been taking distance and online courses around the world for many, many years.  Here in the UK the Open University has been delivering higher education without physical institutions for some 30-odd years, and the UK still has a very strong set of higher education institutions in the physical domain.  I think it is an open question as to the value that students receive from attending a physical institution, and the most important thing in a free market is that of choice.  Students should have a free choice as to where they look for support to achieve their learning goals, and should not be paying over the odds for the support they receive.  I think the only challenge that MOOCs and other educational technology developments present is for those faculty who are not delivering the highest quality educational experience possible, and for those institutions that are charging over the odds.

Pavel: Finally, is there anything that you want to add, or a question that I should have asked you?

Sam: A general comment would be that globally I think we have education back to front.  Rather than asking what skills we should be teaching students and what subjects they should be learning, we should be asking our students what excites them, what they want to learn and what they want to build.

Friday, May 24, 2013

Scheduling Remote Pair Programming Sessions

So maybe it's just me, but it feels like there has been a recent explosion of interest in remote pair programming. Maybe it's been going on for ages, but I recently discovered great resources such as Joe Moore's http://remotepairprogramming.com/ and http://www.pairprogramwith.me/

I've now added the PairWithMe badge to all my projects on GitHub, and I follow the #pairwithme hashtag on Twitter, which is exposing me to things like this remote pair programming meetup group:


@batarski and @eee_c were saying on Twitter that the #pairwithme website should coordinate and maybe host the sessions.  I gave a lame tweet reply that that would be cool, and that some other sites were doing that, but didn't really have space to list them, so here they are:

And there was another one I signed up with recently, but I lost my Chrome history in a recent crash, so can't find it at the moment :-(

However, although I've made contact with a few people through these (mainly just on Twitter), I think I've only got a couple of remote pair programming sessions going from them.  Far more intensive have been the 30+ (and counting) hours I've been remote pair programming on our EdX SaaS LocalSupport project, which we are mainly co-ordinating through Skype chat rooms.  Here's a playlist of those videos:

I'm looking forward to seeing more solutions to the challenge of remote pair programming scheduling.

Saturday, April 27, 2013

HCI Wireframing Assignment: Pair Programming Scheduler

These prototypes are *very* loosely based on my earlier storyboards - basically I switched away from interventions during actual pairing to thinking about scheduling pairing sessions.

Prototype 1, where the user indicates their availability first:

Prototype 2, where the user browses who is available first (more like the http://pair-with-me.herokuapp.com/ tweet aggregator):
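The two flows behind the wireframes could be sketched in Ruby (the language of the LocalSupport project) as something like the following; this is just an illustrative sketch, and the class and method names are all invented rather than taken from any real scheduler:

```ruby
require 'set'

# Hypothetical sketch of the two prototype flows:
# 1) a user declares when they are free to pair;
# 2) a user browses who is free at a given time.
class PairScheduler
  def initialize
    # Maps each user to the set of hours they are free to pair.
    @availability = Hash.new { |h, k| h[k] = Set.new }
  end

  # Prototype 1 flow: declare the hours you are free.
  def declare(user, hours)
    @availability[user].merge(hours)
  end

  # Prototype 2 flow: browse who is free at a given hour.
  def available_at(hour)
    @availability.select { |_, hours| hours.include?(hour) }.keys
  end

  # Suggest partners whose availability overlaps the given user's.
  def matches_for(user)
    mine = @availability[user]
    @availability.reject { |u, hours| u == user || (hours & mine).empty? }.keys
  end
end
```

For example, if alice declares hours 9 and 10 and bob declares 10 and 11, then `available_at(10)` lists both of them and `matches_for("alice")` suggests bob. A real scheduler would of course need time zones, dates and notifications, which is exactly where the wireframes come in.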