Thursday, September 30, 2010

Learning ChatBots

Chapter 26 in "Parsing the Turing Test" was "Who Fools Whom? The Great Mystification, or Methodological Issues on Making Fools of Human Beings" by Eugene Demchenko and Vladimir Veselov, which had some great suggestions for bot techniques and made the point that bot learning is hard.
'Teachable bots is an idea that looks logical at first glance, but becomes almost impossible to implement ... let us look at easy examples:

"John's birthday is January 1st, 1980, What is John's birthday?"

It seems so clear! But let's say this another way:

"Bob is 35. How old is Bob?"

Well, now he is 35 and, according to the educated bot, he will be 35 forever! Birthday is a constant date while "age" is a value that changes in time ... we find ourselves in a vicious circle: to be able to learn, a bot should know much more than it is possible to teach it "manually"'
The authors go on to talk about how Cyc might be reaching the critical threshold to actually start learning, but it seems to me that the problem is one of meta-learning. The authors seem to be focused on bots that learn facts of the type "X is Y", which can be queried through "what is X?" type questions. If this were the only type of learning available, then there would be no alternative but to "manually" program the bot to know the difference between birthdays and ages. However, could the bot not learn, as human children do, that one is constant and the other is variable? The concept I am wrangling with at present is the idea that you could have a dialogue with the bot like so:
Sam: Bob is 35. How old is Bob?

Bot: Bob is 35

[A year later]

Sam: How old is Bob?

Bot: Bob is 35

Sam: No, he is 36

Bot: You previously said "Bob is 35".

Sam: Yes, he was 35, he is now 36 because a year has passed.

Bot: So "Bob is 36", will that change again?

Sam: Yes, every year on his birthday his age will increase by 1.
Of course the obvious flaw with my idea is that the bot needs to be intelligent enough to understand my explanation, but the idea of a bot that can learn through explanation should be clear. I imagine that teaching a new type of knowledge to the bot would be challenging, but I am reminded of that paper (which I now can't find!) by Guy Steele, which assumes knowledge of all single-syllable words and defines every word of more than one syllable in terms of words of one syllable, or of other polysyllabic words that have previously been defined.
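
To make the "constant versus variable" distinction concrete, here is a toy sketch in Ruby (all names hypothetical, and sidestepping the natural language understanding entirely) of a knowledge base in which a taught fact can later have an update rule attached to it, as in the dialogue above:

require 'date'

# A fact is a value plus an optional rule describing how it changes over time.
Fact = Struct.new(:value, :update_rule)

class KnowledgeBase
  def initialize
    @facts = {}
  end

  # "Bob is 35" -- stored as a constant until we learn otherwise.
  def learn(subject, value)
    @facts[subject] = Fact.new(value, nil)
  end

  # "every year on his birthday his age will increase by 1" -- attach a
  # rule so the fact is recomputed rather than stored as a constant.
  def learn_rule(subject, &rule)
    @facts[subject].update_rule = rule
  end

  def query(subject, today = Date.today)
    fact = @facts[subject]
    fact.update_rule ? fact.update_rule.call(today) : fact.value
  end
end

kb = KnowledgeBase.new
kb.learn("Bob's age", 35)        # Sam: Bob is 35
kb.query("Bob's age")            # => 35, forever -- the flaw the authors point out

birthday = Date.new(1975, 6, 1)  # hypothetical; assume Sam teaches this too
kb.learn_rule("Bob's age") do |today|
  age = today.year - birthday.year
  age -= 1 if today < Date.new(today.year, birthday.month, birthday.day)
  age
end
kb.query("Bob's age")            # => now tracks the calendar

The hard part, of course, is getting from Sam's explanation to that block at the end, which is the whole problem.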

Anyhow, this method of bootstrapping bot knowledge needs a lot of work, and there's probably some critical reason why it just won't work, period, but I'm still intrigued by the idea.

Tuesday, September 28, 2010

MegaHAL, Jason Hutchens and AI Ltd (NV?)

So I just read a very interesting chapter by Jason Hutchens in "Parsing the Turing Test", in which he talked a little about his work at AI Ltd (NV?) on mixing a reinforcement learning paradigm with some stochastic word frequency modeling to train an AI. I remember reading about that 10 years ago or so and was really interested to get an update. There was a tantalizing transcript of a HAL/Trainer dialogue in which HAL (the computer) was effectively simulating a 15-month-old's speech. A quick web search turned up that dialogue in an academic paper:

Treister-Goren, A., Hutchens, J.L.: Creating AI: A unique interplay between the development of learning algorithms and their education. In: Proceedings of the First International Workshop on Epigenetic Robotics (2001)

Trainer: where is daddy?
HAL: daddy gone
Trainer: daddy is at work
HAL: daddy work bye bye
Trainer: yes daddy has gone to work
HAL: daddy work bye bye bye bye
Trainer: do you want to go to the park?
HAL: yes bye bye
Trainer: ok we can go to the park
HAL: take car bye bye
Trainer: we will take the car because the park is far. let’s pack some things in a bag
HAL: ball now park
Trainer: yes we should put the ball in the bag
HAL: ball now park mommy
Trainer: wait hal, we need to take something to eat
HAL: bananas story? night now
Trainer: we need to take something to eat
HAL: bananas
Trainer: ok we can take some bananas
HAL: monkeys eat bananas
Trainer: you’re so smart hal. do you want to take a book?
HAL: juice mommy
Trainer: sure we will take some juice as well
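
I have no idea what HAL's internals actually look like, but given that MegaHAL was Markov-model based, my guess at the flavour of "stochastic word frequency modeling" plus reinforcement is something like this toy Ruby bigram model, where the trainer's feedback strengthens or weakens the word transitions just used (entirely my invention, not HAL's actual algorithm):

# Toy bigram model with a crude reinforcement signal -- a sketch of the
# flavour of the technique, not HAL's actual code.
class TinyHal
  def initialize
    # counts[previous_word][next_word] => weight
    @counts = Hash.new { |h, k| h[k] = Hash.new(0.0) }
  end

  # Learn word-to-word transition weights from a training sentence.
  def observe(sentence)
    sentence.downcase.split.each_cons(2) { |a, b| @counts[a][b] += 1.0 }
  end

  # Sample the next word by weighted random choice.
  def next_word(prev)
    candidates = @counts[prev]
    return nil if candidates.empty?
    target = rand * candidates.values.inject(0.0) { |s, w| s + w }
    candidates.each do |word, weight|
      target -= weight
      return word if target <= 0
    end
    nil
  end

  def babble(seed, length = 5)
    out = [seed]
    (length - 1).times do
      w = next_word(out.last) or break
      out << w
    end
    out.join(' ')
  end

  # Trainer feedback: reward or punish the transitions in an utterance,
  # keeping weights above a small floor so nothing is forgotten entirely.
  def reinforce(sentence, reward)
    sentence.downcase.split.each_cons(2) do |a, b|
      @counts[a][b] = [@counts[a][b] + reward, 0.1].max
    end
  end
end

hal = TinyHal.new
hal.observe("daddy gone to work")
hal.observe("daddy gone bye bye")
puts hal.babble("daddy")                  # e.g. "daddy gone bye bye"
hal.reinforce("daddy gone bye bye", 1.0)  # trainer approves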

Jason's chapter didn't have any references and there were a few points he made that I would have liked further details on, specifically:
"... findings which indicate that appropriate guidance may strongly influence ... aspects of the child's language development ... these findings have spawned evaluation and treatment programs in areas that border on the question of intelligence with developmental language data enabling the treatment of some developmentally delayed populations. It has also proven itself a valuable tool in treating schizophrenic thought disorder where clinicians often find themselves unable to capture the communicative problem of patients to assess their intelligence."
Anyhow, the chapter ended by saying there was an exciting future ahead, but a quick web search revealed that Jason left the company in 2001, and I had a little trouble finding out more about the company, although finally I found this website:

http://www.a-i.com/

Where I had a reasonable conversation with a chatbot that seemed more advanced than the 15-month-old one, although the conversation was not dissimilar to one with an original HAL or Eliza program. The website has lots more to explore, including an iPhone app and apparently the ability to train your own bot and make it public; but I couldn't immediately find what I was looking for, which was something about how the program of training infant AIs was progressing. I'd love to see more recent peer-reviewed papers on this stuff. Maybe I'll find it with a bit more looking.

Wednesday, September 22, 2010

Cyc: Parsing the Turing Test

So I just bought my first book on Kindle. It's something I had been putting off, since I don't particularly like the Digital Rights Management that allows Amazon to suspend my access to books that I have paid for, and basically I hate paying for things when I can possibly avoid it. However, I was gagging to read Parsing the Turing Test, and between my impatience and the lack of luggage allowance on my upcoming trip from Japan to the UK, I shelled out $50. I feel a little bad about that, although I console myself that some great work has gone into the book and the authors certainly deserve compensation (although I wasn't able to determine that until after I started reading it), and kudos to Amazon for making the book I bought seamlessly available on my MacBook Pro, Kindle, iPad and iPhone. Now that's handy.

Anyhow, so this will likely be the first of many posts related to particular chapters from the book. First off, I was fascinated to read Chapter 16, "Building a Machine Smart Enough to Pass the Turing Test: Could We, Should We, Will We?" by Douglas Lenat, which describes a series of hurdles, such as dropping absolute truth and consistency, reasoning in particular contexts, and so on, that were apparently overcome to complete phase 1 of the Cyc project.

Lenat goes on to talk about how they are now priming the knowledge pump to allow the system to learn by itself, but it's not clear to me why the knowledge pump has to be primed so much first. I find myself wondering why all knowledge can't be provided through interactive dialogues with the system, i.e. have the system proffer answers and have a human explain when it is incorrect. In the chapter Lenat gives an example of a computer trying to decide whether "fear" or "sleepiness" is appropriate for the victim of a robbery, and gives first-order logic predicates that allow the answer to be inferred. Could the required predicates be inferred by the machine from a human explanation of which emotion is appropriate? I guess that explanations are often more complicated than the questions they answer, raising further questions in turn ...
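
I don't have Lenat's actual predicates to hand, but the shape of what I'm imagining is something like this toy Ruby rule system (the predicates are made up by me, and real Cyc rules would need variables and contexts rather than constants), where a missing answer gets patched by turning the human's explanation into new rules:

# Toy forward-chaining rule system; the predicates are my inventions, not Cyc's.
class RuleBase
  Rule = Struct.new(:conditions, :conclusion)

  def initialize
    @rules = []
  end

  def add_rule(conditions, conclusion)
    @rules << Rule.new(conditions, conclusion)
  end

  # Repeatedly fire any rule whose conditions hold until nothing new appears.
  def infer(facts)
    facts = facts.dup
    loop do
      rule = @rules.find do |r|
        r.conditions.all? { |c| facts.include?(c) } && !facts.include?(r.conclusion)
      end
      break unless rule
      facts << rule.conclusion
    end
    facts
  end
end

rb = RuleBase.new
facts = [[:victim_of, :bob, :robbery]]

p rb.infer(facts)  # => just the original fact; no emotion can be concluded

# Human explanation: "a robbery is a threatening event, and people fear
# threatening events" -- translated (by magic, for now) into two rules:
rb.add_rule([[:victim_of, :bob, :robbery]], [:experienced, :bob, :threatening_event])
rb.add_rule([[:experienced, :bob, :threatening_event]], [:feels, :bob, :fear])

p rb.infer(facts)  # => now includes [:feels, :bob, :fear]

The "by magic" step is exactly the unsolved bit: going from the natural language explanation to the formal rules.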

Lenat refers a number of times to Cyc learning by itself, with certain machine learning issues as yet unsolved. It seems to me the key is some form of representation and learning process that allows the system to have incorrect assertions explained to it in natural language, and to update its model accordingly. I am sure I am being hopelessly naive here, but I will have fun testing my ideas.

Any which way, this is certainly making me see Prolog in a new light :-) I can't wait to get started on building a system to enter the Loebner Prize competition next year, although there are many more chapters to read, and each is overturning a set of my own assumptions. Lenat already lists the "human frailty" deceptions that would be effective/appropriate for passing the test ...

Friday, September 17, 2010

Live ScreenCast Solution

So I (sort of) successfully used Ustream to broadcast a lecture for my Internet Programming class last night. After realizing (again) that the Ustream Producer standalone app's audio doesn't work (at least for me and a few others), I switched to recording through the browser, and that created a stream that my students were able to see. I was even able to screencast by using CamTwist, and I was subsequently able to download the recording. Woot!

Unfortunately the resolution was such that the students could only read the largest of my PowerPoint fonts, and it wasn't really practical to see any of the code examples. Fortunately I had cunningly distributed my slides in advance, so the students were just using the Ustream screencast to know where to look in the slides, and for synced audio. Somewhat ridiculously, the Ustream comments were only available in the live stream, not in the broadcast window, and I didn't want to have the live stream window open; I guess I could have muted it, but I worried about my browser crashing. Anyhow, we ended up using Skype for chat and backup audio, but that means the chat is not synced with the recording, which is a shame for those who were not present.

Anyway, I would really like to broadcast in slightly higher definition, and it seems I could pay ~$200 for Ustream Producer Pro, but I am worried that the audio will not work, as in the free version of Producer, and that I won't even get a sufficient boost in resolution from the point of view of the students.

So what I would really like to know is if anyone has managed to successfully live screencast from OSX with a resolution sufficient for handling code examples, either with Ustream or some other solution ...

Wednesday, September 15, 2010

REXML and WADL issues in Chapter 2 of O'Reilly's RESTful Web Services

So I am teaching a course on RESTful web services, using Sam Ruby and Leonard Richardson's O'Reilly book as the main text. It's a great book, but things have probably changed in a few Ruby libraries since the book's publication in 2007, so a couple of examples in chapter 2 seem to have some problems. I am on OSX 10.6.4 using ruby 1.8.7.

The first issue was:
$ ruby delicious-sax.rb tansaku
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/source.rb:227:in `position': undefined method `stat' for #<StringIO:0x...> (NoMethodError)
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/parsers/baseparser.rb:144:in `position'
from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/rexml/parsers/sax2parser.rb:176:in `parse'
from delicious-sax.rb:21:in `print_my_recent_bookmarks'
from delicious-sax.rb:30
I had been searching around for anything on this error, including looking at the source, but had made no progress. I tried to subscribe to the REXML mailing list described here:

http://www.germane-software.com/software/rexml/index.html#id2248285

but got no response, so I thought I would try ruby-talk. Unfortunately, while I am subscribed to ruby-talk (and don't seem to be able to unsubscribe), I don't seem to be able to post there either. I found at least a partial resolution by chatting with capnregex on the ruby-lang IRC channel at Freenode. The solution was to call xml.string to dump out a String object, rather than passing in the StringIO, which the version of REXML I am using is, somewhat surprisingly, unable to handle.
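Concretely, the change amounts to something like this (reconstructing from the book's example, so the variable name may differ in your copy):

# Before (blows up inside REXML): xml is the StringIO returned by open-uri
parser = REXML::Parsers::SAX2Parser.new(xml)

# After (works): hand the parser the underlying String instead
parser = REXML::Parsers::SAX2Parser.new(xml.string)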

I also found a problem in the same chapter with another example involving WADL:
$ ruby delicious-wadl-ruby.rb tansaku
./wadl.rb:1248:in `lookup_param': No such param post (ArgumentError)
from ./wadl.rb:1255:in `each_by_param'
from delicious-wadl-ruby.rb:22
I took a brief look at the source and didn't have any immediate insight. After some more support from capnregex, I identified a good approach: running the entire program in irb to see what was being passed back from the server. This is the annoying thing about del.icio.us as a programming example: since it is all over HTTPS, there is no tracing it with tcpdump.

I got as far as seeing that the appropriate data did seem to be returned, but it was residing in get.format rather than get.representation, and was a REXML::Element rather than the WADL::XMLRepresentation one would intuitively expect from the each_by_param method call. get.headers.string appeared to contain the desired XML, but not in a format (StringIO) that could be parsed the way the program was attempting. Tabbing on get.headers. suggested that each_by_param was an appropriate method, but it gave an undefined method error, so I gave up there and decided to drop the WADL example for the moment. We'll come back to WADL in a later chapter, but I am becoming increasingly dubious about its value. At least the REST community does not seem to have sprung into action around it, after the effort the book's authors put into getting it off the ground.

All files required to run the above are in pastebin:

delicious-sax.rb: http://pastebin.com/029SnVEW

delicious-wadl-ruby.rb: http://pastebin.com/Nz5hFGRN
wadl.rb (required by above): http://pastebin.com/AKRAETG8
delicious.xml (required by above): http://pastebin.com/7hUH7uwy

Of course you will need to use your own del.icio.us username/password combination for the examples to work :-)