Again, I think you are veering into religious thinking here; just because something has Eliezer Yudkowsky’s name on it doesn’t mean that it’s true.
Personally, I know the essay, and I happen to fundamentally disagree with it. I’m a pragmatic Bayesian at the best of times and a radical empiricist at my worst, so the kind of view Eliezer espouses has very little sway over me.
But despite the condescending tone of this reply, if I may make a very coarse assumption, I can probably summarize our difference here as either me putting too much weight on error accumulation in my model of the world, or you not taking into account how error accumulation works. I’m not saying one perspective or the other is correct; this, I think, is where we differ, and I assume our difference is quite fundamental in nature.
Given that your arguments seem to rest mainly on simple formal models operating in what I see as an “ideal” universe, from which you then draw your chain of inferences leading to powerful AGI, I assume you might have a background in mathematics and/or philosophy.
I do think that my article is actually rather bad at addressing AGI from this angle.
I’m honestly unsure whether the issue can even be addressed from this perspective, but I do think it might be worth a broader piece on why this perspective is flawed (i.e. an argument for why a perspective/model/world-view resting on long inferential distances is inherently flawed).
So, I honestly think this conversation might not have been pointless after all, at least not from my side, because it gives me an idea for an essay and a reason to write it.
Granted, I assume you have still gained nothing in terms of understanding my perspective, because quite frankly I did a bad job of presenting it in a way you would understand: I was not addressing the correct problem. For that, I am sorry.
Then again, I might be making too many assumptions about your perspective and background here, stacking imperfect inference upon imperfect inference and creating a caricature that does not match reality in any meaningful way.