Tuesday, November 29, 2011

Confusing Mathematics Definitions

What is it with mathematical definitions? Are they deliberately trying to be obtuse? Consider Rosen's definition of a primitive root:
A "primitive root" modulo a prime p is an integer r in Zp such that every nonzero element of Zp is a power of r.
Wikipedia was no clearer:
a primitive root modulo n is any number g with the property that any number coprime to n is congruent to a power of g modulo n. In other words, g is a generator of the multiplicative group of integers modulo n. That is, for every integer a coprime to n, there is an integer k such that g^k ≡ a (mod n). Such k is called the index or discrete logarithm of a to the base g modulo n.
After I had unpacked these definitions and looked at several examples, my idea of a clear definition would be:
Given a prime p, a “primitive root modulo p” is an integer r such that every integer from 1 to p-1 appears among the powers r^1, r^2, ..., r^(p-1) taken modulo p.
Of course, none of these definitions really make much sense without an example:

Example: Since all the integers from 1 to 10 appear among the powers of 2 modulo 11, 2 is a primitive root of 11. Powers of 2 modulo 11: 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 5, 2^5 = 10, 2^6 = 9, 2^7 = 7, 2^8 = 3, 2^9 = 6, 2^10 = 1.
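The definition is easy to check mechanically. A quick sketch in Python (the function name is mine, not from any library):

```python
# Sketch: r is a primitive root modulo a prime p if the powers
# r^1 .. r^(p-1) taken mod p cover every integer from 1 to p-1.
def is_primitive_root(r, p):
    powers = {pow(r, k, p) for k in range(1, p)}
    return powers == set(range(1, p))

print(is_primitive_root(2, 11))  # True: powers of 2 hit all of 1..10
print(is_primitive_root(3, 11))  # False: powers of 3 only hit {1, 3, 4, 5, 9}
print(sorted(pow(2, k, 11) for k in range(1, 11)))
```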

Friday, November 25, 2011

Taxonomy of Game Mechanisms

I've been intrigued by a parallel I've noticed between games created by my students and some concepts from the Stanford AI class, e.g. partial observability.

In general, in basic 2D games, partial observability of the environment makes the game a lot more fun. I was just playing a sidescroller by one student that imported a mechanism somewhat similar to the old Missile Command arcade game, and I was wondering if one could create a taxonomy of game mechanisms, like the one for AI agents/environments, and use it to predict which combinations of game mechanisms would be fun and which would not ...

http://www.ferzkopp.net/joomla/science-mainmenu-15/9-ferzkopps-philosophy-work/77-a-heuristic-taxonomy-of-computer-games

http://www.iexbeta.com/wiki/index.php/Video_Game_Taxonomy

but I'm thinking of something finer grained that might link up to the abilities and types of agents playing the games ...

It also makes me think about why watching certain types of sports is compelling. Watching people dodge each other seems to set off interest centers in our brains related to our evolutionary development: being able to dodge away from a sabre-toothed tiger would have been highly advantageous. That also makes me think of genetic algorithm simulations showing the evolution of rabbits dodging foxes, or similar, from a talk at the University of Sussex back in the day ...

Wednesday, November 23, 2011

Stanford AI Class Midterm

Here are my notes on the Stanford AI midterm.

Q.1

a) There exists (at least) one environment in which every agent is rational

I take this to mean one environment in which all agents, regardless of behavior, will optimally achieve their goals. At first I thought this must be false, since with two agents we could set things up so that the first doesn't move and the second does. But I guess if we set it up so that every action incurs no cost and every agent is rewarded maximally independent of its behavior, then every agent is rational in that environment.

so I am making both of the first two true, although I have a strong feeling the instructors want the first one to be answered false, but I haven't come up with the necessary counterexample

Regarding the 15-puzzle:

a lookup reflex agent will need to have a move stored for each of the 16! positions the puzzle can be in, i.e. about 2x10^13 (roughly 20 trillion). An agent that searches will look at the starting state and consider all the possible moves branching out from there. A depth-first search agent will only store a single route at a time, discarding it as it backtracks, so it takes up very little space.

There is an upper bound of 4 moves in each state, so the max branching factor of the tree is 4, and a search to depth d has at most 4^d nodes. I'm not sure of the maximum number of moves needed to reach the solution, but even a fairly deep tree, say 4^15, is only about a billion nodes, much less than 16!.

So I'm probably not reasoning soundly here, but still the answer feels right, i.e. that the lookup table will be larger in terms of memory, but the agent that searches will not do any better ...
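A back-of-envelope check of the two sizes, assuming a search depth of 15 just to get an order of magnitude:

```python
import math

# Lookup reflex agent: one stored move per arrangement of the 16 tiles.
table_size = math.factorial(16)   # 16! ~ 2.1e13

# Searching agent: branching factor at most 4, so a depth-15 tree
# (an assumed depth, for scale) has at most 4^15 nodes.
tree_size = 4 ** 15               # ~ 1.1e9

print(table_size, tree_size)
```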



Q.2

A*

always expand the node with the minimum value of the function f = g+h
g(path) = path cost
h(path) = h(state) = estimated distance to the goal

layer 2 is labelled a1-5 from left to right and layer 3 is labelled b1-6 also from left to right

f(top-node) = 0+15 Expanded 1st

f(a1) = 10+11 = 21
f(a2) = 10+8 = 18 Expanded 4th
f(a3) = 10+7 = 17 Expanded 3rd
f(a4) = 10+6 = 16 Expanded 2nd
f(a5) = 10+10 = 20 Expanded 5th

f(b4) = 20+5 = 25
f(b5) = 20+20 = 40

f(b2) = 20+3 = 23
f(b3) = 20+9 = 29

f(b6) = 20 + 0 = 20 Expanded 6th

My checking seems to match my initial guess

Admissible --> h(s) <= true cost, i.e. h never overestimates the distance to the goal

and I think this heuristic is not admissible, since the h value at b5 is 20 when it is only 10 away from the goal. One might also question whether the other b nodes can actually reach the goal node, but they are all underestimating under any assumptions I can think of.
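The expand-minimum-f loop is easy to sketch. Here is a minimal A* in Python on a small hypothetical graph (the nodes, step costs, and heuristic values below are made up for illustration, not the exam's tree):

```python
import heapq

graph = {                        # node: [(neighbor, step_cost), ...]
    'S': [('A', 10), ('B', 10)],
    'A': [('G', 10)],
    'B': [('G', 20)],
    'G': [],
}
h = {'S': 15, 'A': 6, 'B': 8, 'G': 0}   # assumed heuristic values

def astar(start, goal):
    # frontier entries: (f, g, node, path); heapq pops the smallest f first
    frontier = [(h[start], 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None

print(astar('S', 'G'))   # (20, ['S', 'A', 'G'])
```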



Q.3

P(H) = 0.3
P(F) = 0.7

Q.4

unfair coin
P(HH) = 0.04 = P(H) * P(H) --> P(H) = 0.2
--> P(T) = 0.8
--> P(TT) = 0.64
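The arithmetic above, spelled out (assuming independent flips of the same coin):

```python
import math

p_hh = 0.04
p_h = math.sqrt(p_hh)    # P(HH) = P(H)^2, so P(H) = 0.2
p_t = 1 - p_h            # 0.8
p_tt = p_t ** 2          # ~0.64
print(p_h, p_t, p_tt)
```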



Q.5

P1(H) = 0.5
P2(H) = 1

pick a coin at random and see a head, what is P(P2|H)

So I think this should be Bayes' rule

P(P2|H) = P(H|P2)P(P2)/P(H) where P(H) = P(H|P2)P(P2) + P(H|P1)P(P1)
= 1*0.5/[1*0.5+ 0.5*0.5]
= 0.5/[0.5+0.25]
= 50/75 = 2/3

The sense of multiple worlds here is very strong. There is a world where we chose the fair coin and a world where we chose the loaded coin. There are two possible coin-flip results in the fair world, and only one in the loaded world. The evidence says we are in one of the two head-showing states out of these three possibilities, and since the loaded world is a priori equally likely to the fair world, we are twice as likely to be in the loaded world as in the fair world ...

now if we see two heads in a row, what is P(P2|HH)

P(P2|HH) = P(HH|P2)P(P2)/P(HH) where P(HH) = P(HH|P2)P(P2) + P(HH|P1)P(P1)
= 1*0.5/[1*0.5+ 0.25*0.5]
= 0.5/[0.5+0.125]
= 0.5/0.625 = 4/5

now there is still only one possible loaded-world situation, but in the fair world there are four possible coin outputs we might have seen, and if we are in the fair world we are in precisely one of those four. Being in the loaded world is now four times as likely as being in the fair world.

Not sure if I was supposed to be drawing a bayes net here, or if that would help and/or adjust the answer ...
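The posterior can also be cranked out directly, without a net. A small check (P1 fair, P2 always heads, equal priors; function name is mine):

```python
# P(loaded | n heads in a row), via Bayes' rule with priors of 0.5 each.
def posterior_loaded(n_heads):
    p_e_given_fair = 0.5 ** n_heads     # fair coin: each head has prob 0.5
    p_e_given_loaded = 1.0 ** n_heads   # loaded coin: heads every time
    num = p_e_given_loaded * 0.5
    return num / (num + p_e_given_fair * 0.5)

print(posterior_loaded(1))   # 2/3
print(posterior_loaded(2))   # 4/5
```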


Q.6

very confused now - need to rewatch everything on conditional independence

3.34 D-separation series is the key here

"Any two variables are independent if they are not linked by just unknown variables"

re-looked at all the quizzes and homeworks, and changed my initial answer slightly to indicate that A was not conditionally independent of C given G, because G tells us about E, and E links A and B ERROR HERE SINCE QUESTION IS ABOUT A & C

I am still on the fence about whether A should be conditionally independent of B given F. In 3.36, F was independent of A given knowledge of H, which told us about a child variable linking A and F, but that didn't tell us about the parent of that child variable.

Q.7

So Hmwk 2.3. explanation is my friend here:


P(B|C) = P(B|C,A)P(A|C) + P(B|C,!A)P(!A|C)
P(B|C) = P(B|A)P(C|A)P(A)/P(C) + P(B|!A)P(C|!A)P(!A)/P(C)
P(B|C) = 0.2*0.8*0.5/0.6 + 0.2*0.4*0.5/0.6 = 0.2

P(C) = P(C|A)P(A) + P(C|!A)P(!A)
P(C) = 0.8*0.5 + 0.4*0.5 = 0.6

P(C|B) = P(C|B,A)P(A|B) + P(C|B,!A)P(!A|B)
P(C|B) = P(C|A)P(B|A)P(A)/P(B) + P(C|!A)P(B|!A)P(!A)/P(B)
P(C|B) = 0.8*0.2*0.5/0.2 + 0.4*0.2*0.5/0.2 = 0.4 + 0.2 = 0.6

P(B) = P(B|A)P(A) + P(B|!A)P(!A)
P(B) = 0.2*0.5 + 0.2*0.5 = 0.2

again, unsure whether I've made a calculation error, but it looks like knowledge of one doesn't change the other: P(B|C) = P(B) = 0.2 and P(C|B) = P(C) = 0.6
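Enumerating over A confirms the numbers, using the values from my working above (P(A)=0.5, P(B|A)=P(B|!A)=0.2, P(C|A)=0.8, P(C|!A)=0.4, with B and C conditionally independent given A):

```python
# Sum out A to get the marginals and the joint of B and C.
p_a = 0.5
p_b_given_a = {True: 0.2, False: 0.2}
p_c_given_a = {True: 0.8, False: 0.4}

def p_of(a):
    return p_a if a else 1 - p_a

p_b = sum(p_b_given_a[a] * p_of(a) for a in (True, False))                   # ~0.2
p_c = sum(p_c_given_a[a] * p_of(a) for a in (True, False))                   # ~0.6
p_bc = sum(p_b_given_a[a] * p_c_given_a[a] * p_of(a) for a in (True, False)) # ~0.12

print(p_bc / p_c)   # P(B|C) ~ 0.2, same as P(B)
print(p_bc / p_b)   # P(C|B) ~ 0.6, same as P(C)
```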


Q.8

so my intuition for P(OLD) was actually the maximum-likelihood answer

I redid my spreadsheet for this, eventually getting to what I hope is the right answer. The fractions work out reasonably, which suggests to me it might be right, as that is what I saw Thrun doing in Hmwk 3.2a and b. Feels like I should be able to do this with just pencil and paper, but I don't trust myself not to make minor mistakes, which is why I put it in the spreadsheet - although I still worry I will have spreadsheet errors ...

ERROR HERE SINCE I DID 4/7 INSTEAD OF 5/7

Q.9

rewatched the K-Nearest Neighbor 5.33 videos, but my initial guess of 7 still seems correct. The suggestion about ties just seems confusing, since ties were never mentioned in the lecture and superficially all the points are at different distances from the query point.


Q.10

Now, first time through I just drew a graph and guessed the numbers ... how about linear regression in Octave?

A = [1,2; 3,5.2; 4,6.8; 5,8.4; 9,14.8]
X = A(:,1)
Y = A(:,2)
sum((Y-X).^2)   % ans = 58.880

gives me sum of square of differences, but not sure what to do next ...

Going back to 5.36 in the lectures I see Thrun deriving

w_0 = 1/M * sum(y) - w_1/M * sum(x)

and

w_1 = [M * sum(x*y) - sum(x)*sum(y)] / [M * sum(x.^2) - (sum(x))^2]

and I didn't trust myself to derive these correctly

so I can vectorize these:

M = size(X)(1)
w_1 = (M * X'*Y - sum(X)*sum(Y)) / (M * sum(X.^2) - (sum(X))^2)
w_0 = 1/M * sum(Y) - w_1/M * sum(X)

gives
w_1 = 1.6 (guessed 0.642)
w_0 = 0.4 (guessed 1)

So plotting those I get an exact match through all the data points, suggesting I could have intuited this without the math if I had drawn it precisely ... :-) My first guess based on the hand plot was well off ... :-)

plot(X,Y)
hold on
XT = [0;1;2;3;4;5;6;7;8;9]
YT = w_0 + w_1 * XT
plot(XT,YT,'r')
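As a cross-check on the Octave session, the same least-squares fit redone in Python from the closed-form formulas:

```python
# Data from the Octave session above.
xs = [1, 3, 4, 5, 9]
ys = [2, 5.2, 6.8, 8.4, 14.8]
m = len(xs)

sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

# w_1 = [M*sum(xy) - sum(x)*sum(y)] / [M*sum(x^2) - (sum(x))^2]
w1 = (m * sxy - sx * sy) / (m * sxx - sx ** 2)
w0 = sy / m - w1 * sx / m
print(w1, w0)   # ~1.6 and ~0.4, matching the Octave result
```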

still pondering over relation between NNs, linear regression and statistical significance


Q.11

my initial intuition appears wrong after watching 6.5 on k-means clustering, which suggests the initial division of the space is done by drawing the line of equidistance between C1 and C2, indicating that only one point will get associated with C1

Q.12

so this should be straightforward

Valid --> always true
Satisfiable --> true for at least one assignment of truth values
Unsatisfiable --> always false

My initial guess on (A&&!A)->(B->C) appears wrong as I, as usual, misremembered the truth table for implication - since A&&!A is always false, the whole implication is always true, i.e. valid

Truth table appears to confirm my intuition that (A->B)&&(B->C)&&(C->A) is satisfiable - my intuition based on finding cases where it would be false and where it would be true

My intuition about the equivalence of (A->B)&&(B->C) and A->C seems disproved by the truth table - if A is true, B is false and C is true, then A->B is false while A->C is true COUNTEREXAMPLE
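All three Q.12 intuitions can be checked by brute-forcing the eight truth assignments:

```python
from itertools import product

# implies(a, b) is the standard material implication: not a, or b.
def implies(a, b):
    return (not a) or b

assignments = list(product([False, True], repeat=3))

# (A && !A) -> (B -> C): valid, because the antecedent is always false.
valid = all(implies(a and not a, implies(b, c)) for a, b, c in assignments)

# (A -> B) && (B -> C) && (C -> A): satisfiable (e.g. all three true).
sat = any(implies(a, b) and implies(b, c) and implies(c, a)
          for a, b, c in assignments)

# (A -> B) && (B -> C) vs A -> C: not equivalent; A=T, B=F, C=T breaks it.
equiv = all((implies(a, b) and implies(b, c)) == implies(a, c)
            for a, b, c in assignments)

print(valid, sat, equiv)   # True True False
```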


Q.13

seemed possible to answer this from common sense rather than applying any particular knowledge from the course

my second pass on it including review of the clarifications doesn't throw up anything of concern ...


Q.14

could spin wheels here and the answer from value iteration might well be my best guess, which is subtract five going back into each of the states

actually the explanation in the 9.18 quiz gives the solution for the deterministic case, giving me higher confidence here.
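The "subtract five going back" pattern can be sketched as deterministic value iteration on a hypothetical 1-D grid (the grid size, goal value 100, and move cost 5 below are my assumptions for illustration, not the exam's actual numbers):

```python
# Deterministic value iteration: V(s) = max over moves of [-move_cost + V(next)].
# Values fall off by move_cost per step away from the goal.
def value_iteration(n_states, goal, goal_value=100.0, move_cost=5.0, sweeps=50):
    v = [0.0] * n_states
    v[goal] = goal_value
    for _ in range(sweeps):
        for s in range(n_states):
            if s == goal:
                continue
            neighbors = [s2 for s2 in (s - 1, s + 1) if 0 <= s2 < n_states]
            v[s] = max(-move_cost + v[s2] for s2 in neighbors)
    return v

print(value_iteration(5, goal=4))   # [80.0, 85.0, 90.0, 95.0, 100.0]
```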


Q.15

mental mistake - I thought that Laplace smoothing would involve +4 on the denominator since there are four possible transitions overall, but it is only +2, as we are considering the possible transitions out of a single state in a two-state model

Also I had them grouped wrong initially - need to group by the state we are coming from

AAAAB

F(A|A) = 3
F(B|A) = 1
F(A|B) = 0
F(B|B) = 0


P(A_0) = (1+1)/(1+2)
P(A|A) = (3+1)/(4+2)
P(A|B) = (0+1)/(0+2)
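The smoothed estimates above, computed mechanically (add-one Laplace smoothing over the two possible next states):

```python
# Laplace (add-one) smoothing of transition counts for the sequence AAAAB.
seq = "AAAAB"
states = ["A", "B"]

counts = {s: {t: 0 for t in states} for s in states}   # counts[from][to]
for prev, nxt in zip(seq, seq[1:]):
    counts[prev][nxt] += 1

def smoothed(frm, to, k=1):
    total = sum(counts[frm].values())
    return (counts[frm][to] + k) / (total + k * len(states))

print(smoothed("A", "A"))   # (3+1)/(4+2) ~ 0.667
print(smoothed("B", "A"))   # (0+1)/(0+2) = 0.5
p_a0 = (1 + 1) / (1 + 2)    # one observed sequence, starting with A
print(p_a0)                 # ~0.667
```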

Friday, November 18, 2011

intuitive motion specifier

I keep having an idea, in slightly different forms, about how cool it would be to move elements around in some GUI and then have the code for doing that appear automatically. I first thought about this in the context of helping my young son learn to program. I tried Alice and Scratch and a few other kid-friendly software kits, but surprisingly none of them had the ability for the user to move an element around, i.e. intuitively specify what its behaviour should be, and then have that stored as a behaviour that could be repeated, reversed etc.

The same idea came to me again in the context of game design: in a game editor like GameMaker or Unity3D it would be great to be able to "show" the system what kind of motion you wanted, and have the corresponding behaviour appear in the editor in a form that could be manipulated. Of course it wouldn't work simply for all behaviours; getting one object to follow another would be trickier.

The thing at the moment is that I can't see how to add such functionality to any of the existing systems I have looked at without getting very deeply involved in them. I could build a demonstration interface in JavaScript, but by itself it wouldn't be able to do very much. I have the feeling that eventually this functionality will appear somewhere, some people will go "oh cool", and I will kick myself for not having done anything with the idea - or it is already present in some software that I just haven't found yet ...