So I just bought my first book on Kindle. Something I had been putting off, since I don't particularly like the Digital Rights Management that allows Amazon to suspend my access to books I have paid for. And basically I hate paying for things when I can possibly avoid it. However, I was gagging to read Parsing the Turing Test, and between my impatience and my lack of luggage allowance on my upcoming trip from Japan to the UK, I shelled out $50. I feel a little bad about that, although I console myself that some great work has gone into the book and the authors certainly deserve compensation (although I wasn't able to determine that until after I started reading it). And kudos to Amazon for making the book I bought seamlessly available on my MacBook Pro, Kindle, iPad and iPhone. Now that's handy.
Anyhow, this will likely be the first of many posts on particular chapters from the book. First off, I was fascinated to read Chapter 16, "Building a Machine Smart Enough to Pass the Turing Test: Could We, Should We, Will We?" by Douglas Lenat, which describes a series of hurdles the Cyc project apparently had to overcome (dropping absolute truth, relaxing global consistency, reasoning within particular contexts, and so on) in order to complete phase 1 of the project.
Lenat goes on to talk about how they are now priming the knowledge pump to allow the system to learn by itself, but it's not clear to me why the pump has to be primed quite so much first. I find myself wondering why all knowledge can't be provided through interactive dialogues with the system, i.e. have the system proffer answers and have a human explain where it is incorrect. In the chapter Lenat gives the example of a computer trying to decide whether "fear" or "sleepiness" is the appropriate emotion for the victim of a robbery, and gives first-order logic predicates that allow the answer to be inferred. Could the required predicates be inferred by the machine from a human explanation of which emotion is appropriate? I suspect that explanations are often more complicated than the questions they answer, raising further questions in turn ...
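To make the robbery example concrete, here is a minimal Prolog sketch of the kind of inference involved. To be clear, the predicates (threatening_event/1, appropriate_emotion/2 and friends) are my own invention for illustration, not Lenat's actual CycL axioms:

    % Toy encoding of the robbery example: infer that fear, not
    % sleepiness, is the appropriate emotion for a robbery victim.
    :- dynamic tired/1.            % tiredness is simply unknown here

    threatening_event(robbery).    % a robbery is a threatening event

    % the victim of an event is a participant in it
    participant(Person, Event) :- victim(Person, Event).

    % a participant in a threatening event appropriately feels fear
    appropriate_emotion(Person, fear) :-
        participant(Person, Event),
        event_type(Event, Type),
        threatening_event(Type).

    % sleepiness is only appropriate if the person is known to be tired
    appropriate_emotion(Person, sleepiness) :- tired(Person).

    % sample facts: event e1 is a robbery and mary is its victim
    event_type(e1, robbery).
    victim(mary, e1).

Querying appropriate_emotion(mary, E) gives E = fear, and nothing supports sleepiness. The interesting question is whether rules like these could be induced from the dialogue itself rather than hand-coded in advance.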
Lenat refers a number of times to Cyc learning by itself, with certain machine learning issues as yet unsolved. It seems to me the key is some form of representation and learning process that allows incorrect assertions to be explained to the system in natural language, with the system updating its model accordingly. I am sure I am being hopelessly naive here, but I will have fun testing my ideas.
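As a thought experiment, here is what the crudest possible version of that update step might look like in Prolog. This is entirely my speculation, nothing like Cyc's actual mechanism: a correction just retracts the wrong belief and asserts the replacement:

    :- dynamic believed/1.

    % the system records what it currently believes
    believe(Fact) :- assertz(believed(Fact)).

    % a human correction retracts the wrong assertion and
    % asserts the right one in its place
    correct(Wrong, Right) :-
        retractall(believed(Wrong)),
        assertz(believed(Right)).

Real explanation-driven learning would of course have to induce new rules from the explanation, not just swap one ground fact for another, which is presumably where the unsolved part lives.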
Any which way, this is certainly making me see Prolog in a new light :-) I can't wait to get started on building a system to enter the Loebner Prize competition next year, although there are many more chapters to read, and each is overturning another set of my assumptions. Lenat already lists the "human frailty" deceptions that would be effective/appropriate for passing the test ...
Wednesday, September 22, 2010