[edit: yeah, on slower reflection, I think this was guessable but not obvious before papers were published that clarified this perspective.]
and they were blindsided by alphago, whereas @jacob_cannell and I could post screenshots of our old google hangouts conversation from january 2016, where we had been following the go ai research and had sketched out the obvious next additions, which in fact ended up being a reasonable guess at what would work. we were surprised it worked quite as well as it did, quite so soon, and I lost a bet that it wouldn’t beat lee sedol overall, but dang, it’s frustrating how completely blindsided the aixi model was by the success, and yet it stuck around.
You mean shouldn’t have existed?
no, I mean it was always a deeply confused question whose resolution is to say that the question is invalid rather than to answer it: not “shouldn’t have been asked”, but “was asking about a problem that could not have been in the territory, because the model was invalid”. how do you model embedded agency? by giving up on the idea that there are coherent ways to separate the universe completely. the ideal representation of friendliness can be applied from a god’s-eye perspective to any two arbitrary blocks of matter to ask how friendly they have been to each other over a particular time period.
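(purely illustrative sketch, not something from the thread; every name below is made up. the point is only the type signature being gestured at: a friendliness functional that takes a whole world-history, two arbitrary regions of matter, and a time window, with no privileged agent/environment cut of the kind AIXI's cartesian setup assumes.)

```python
# toy type-signature sketch of "god's-eye friendliness over arbitrary blocks of matter".
# all names here are hypothetical; only the shape of the function is the point.

from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class Region:
    """an arbitrary block of matter, identified however you like (e.g. a spatial mask)."""
    label: str


@dataclass(frozen=True)
class WorldHistory:
    """a complete trajectory of the universe over some span of time (stand-in object)."""
    description: str


# no agent/environment split is assumed: a and b are just two regions, and the
# functional scores how friendly a has been toward b over the window [t0, t1].
Friendliness = Callable[[WorldHistory, Region, Region, float, float], float]


def toy_friendliness(history: WorldHistory, a: Region, b: Region,
                     t0: float, t1: float) -> float:
    """placeholder implementation; returns a constant, since only the signature matters here."""
    return 0.0


if __name__ == "__main__":
    h = WorldHistory("some trajectory of the universe")
    score = toy_friendliness(h, Region("block A"), Region("block B"), 0.0, 1.0)
    print(f"friendliness of A toward B over [0.0, 1.0]: {score}")
```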
but maybe that was what they were asking the whole time, and the origin of my frustration was the fact that they thought they had a gold standard to compare to.
yeah, it does seem like probably a lot of why this seems so obvious to me is that I was having inklings of the idea that you need a smooth representation of agency and friendliness, and then discovering agents dropped and nailed down what I was looking for, and now I just think it’s obvious and have a hard time imagining it not being.
Many did back in the day...very vociferously in some cases.
LW/MIRI has a foundations problem. The foundational texts weren’t written by someone with knowledge of AI, or of the other relevant subjects.