I suppose those of you who down-voted me felt quite rational in doing so.
And this is precisely why I seldom post here, and only read a few posters whom I know to be rational from their own work on the net, not from what they write here:
There are too many fake rationalists here. The absence of any real arguments, either way, in response to my article above is evidence of this.
My Othello/Reversi example above was easy to understand and touches on a very central problem in AI systems, so it should be of interest to real rationalists interested in AI. Instead there has been only a negative reaction, from people who, I guess, have never built a decent game-playing AI but nevertheless hold strong opinions on how such systems must work.
So, for getting intelligent, rational arguments on AI, this community is useless, as opposed to Yudkowsky, Schmidhuber, Hansen, Tyler, etc., who have shown on their own sites that they have something to contribute.
To get real results in AI and rationality, I do my own math and science.
Your Othello/Reversi example is fundamentally flawed, but it may not seem that way unless you realize that at LW the tradition is to say that utility is linear in paperclips to Clippy. That may be our fault, but there’s your explanation. “Winning 60-0”, in our jargon, is equivalent to one paperclip, not 60. And “winning 33-31” is also equivalent to one paperclip, not 33. (Or they’re both equivalent to x paperclips; whatever the constant, the point is the same.)
So when I read your example, I read it as “80% chance of 1 paperclip, or 90% chance of 1 paperclip”.
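To make the jargon difference concrete, here is a minimal sketch in Python comparing the two strategies under both utility conventions. The probabilities (80%, 90%) and margins (60-0, 33-31) are from your example; assigning a loss a utility of zero under both conventions is my assumption.

```python
# A minimal sketch (assumptions mine): a loss is worth 0 under both
# conventions, and each strategy either wins by its stated margin or loses.

def expected_utility(p_win, win_margin, utility):
    """Expected utility when we win by win_margin with probability p_win
    and otherwise lose (a loss is assumed to be worth 0)."""
    return p_win * utility(win_margin)

# Convention 1: utility is linear in discs -- "winning 60-0 is 60 paperclips".
margin_utility = lambda margin: margin

# Convention 2 (the LW reading): a win is one paperclip, regardless of margin.
win_utility = lambda margin: 1 if margin > 0 else 0

strategies = [("A: 60-0 at 80%", 0.8, 60), ("B: 33-31 at 90%", 0.9, 2)]
for name, p, margin in strategies:
    print(f"{name}  margin-utility: {expected_utility(p, margin, margin_utility):.1f}"
          f"  win-utility: {expected_utility(p, margin, win_utility):.1f}")

# Prints:
# A: 60-0 at 80%  margin-utility: 48.0  win-utility: 0.8
# B: 33-31 at 90%  margin-utility: 1.8  win-utility: 0.9
```

Under the win/loss convention the disc margin drops out entirely, which is why the example reads here as a straight 90% > 80% comparison.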
I’m sure it’s very irritating to have your statement misread because of a jargon difference (paperclip = utility, rather than f(paperclips) = utility)! I encourage you to post anyway, and to begin with the assumption that we misunderstand you rather than the assumption that we are “fake rationalists”; but realize that in the current environment (unfortunately or not, but there it is) the burden of communication is on the poster.