My argument here is that it doesn’t; it’s an empty formalism with no practical application. … There are literal physical limitations on computational resources.
Where do I even start with this? That argument proves too much. You could apply the same argument to engineering in general. “Well, it would take infinite computing power to sum up an integral, so I guess we can’t ever use numerical approximations.” Please read through An Intuitive Explanation of Solomonoff Induction. In particular, I will highlight:
But we can find shortcuts. Suppose you know that the exact recipe for baking a cake asks you to count out one molecule of H2O at a time until you have exactly 0.5 cups of water. If you did that, you might not finish the cake before the heat death of the universe. But you could approximate that part of the recipe by measuring out something very close to 0.5 cups of water, and you’d probably still end up with a pretty good cake.
Similarly, once we know the exact recipe for finding truth, we can try to approximate it in a way that allows us to finish all the steps sometime before the sun burns out.
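To make the shortcut point concrete, here is a minimal sketch (my own toy example, not from the essay): the “exact” recipe for an integral is an infinite sum, but the trapezoid rule gets within a tiny error of it with a handful of slices, just as measuring out roughly 0.5 cups of water still gets you the cake.

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# The exact answer would take an infinite sum; a modest number of slices
# already lands very close to the true value of the integral of
# exp(-x**2) over [0, 1] (about 0.746824).
f = lambda x: math.exp(-x * x)
for n in (10, 100, 1000):
    print(n, trapezoid(f, 0.0, 1.0, n))
```

The same spirit applies to Solomonoff induction: the uncomputable ideal is the recipe, and practical reasoning is the measured-out approximation of it.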
Now you are literally using Zeno to steal cattle.
The problem with your very wide perspective seems to be that you are basically taking Pascal’s wager.
But for now I honestly don’t have time to continue this argument, especially if you’re gonna take the “read this book, peasant” style approach to filling understanding/interpretation gaps.
Though this discussion has given me an idea for a “Why genetically engineered parrots will bring about the singularity” piece as a counter-argument to this kind of logic.
Other than that, congrats, you “win”. I’m afraid, however, that I understand your position, and why you hold it, no better than when we began. Nor do I understand what would change your position or what its principal pillars are… :/
You’re fighting a strawman, George. You clearly do not understand our real arguments. Attempts to point this out have only been met with your hostility. I do not have the patience to tutor one so unwilling to study.

If you have any desire to cross the inferential gap, I will refer you to the LessWrong FAQ:
If your post involves topics that were already covered in the sequences you should build on them, not repeat what has already been said. If your post makes mistakes that were warned against in the sequences, you’ll likely be downvoted and directed to the sequence in question.
That is exactly what is happening here. The symptoms of this dialogue are diagnostic of an inferential gap. Your case is not the first. Read the Sequences, George. Especially the parts we’ve linked you to.
On the other hand, we’re well aware that it can take a long time to read through several years worth of blog posts, so we’ve labeled the most important as “core sequences”. Looking through the core sequences should be enough preparation for most of the discussions that take place here. We do recommend that you eventually read them all, but you can take your time getting through them as you participate. Before discussing a specific topic, consider looking to see if there is any obvious sequence on that topic.
Again, I think you are veering into religious thinking here: just because something has Eliezer Yudkowsky’s name on it doesn’t mean that it’s true.
Personally, I know the essay and I happen to fundamentally disagree with it. I’m a pragmatic Bayesian at the best of times and a radical empiricist at my worst, so the kind of view Eliezer espouses has very little sway on me.
But despite the condescending voice you give this reply, if I am to make a very coarse assumption, I can probably summarize our difference here as either me putting too much weight on error accumulation in my model of the world, or you not taking into account how error accumulation works. (I’m not saying one perspective or the other is correct; this is, I think, where we differ, and I assume our difference is quite fundamental in nature.)
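To give a rough sense of what I mean by error accumulation, here is a toy sketch (my own illustration, with made-up numbers, assuming each inferential step is independently reliable): the further a conclusion sits from observation, the faster its reliability decays.

```python
# Toy model of error accumulation along a chain of inferences.
# Assumption (mine, for illustration only): each step is independently
# correct with probability p, so a conclusion n steps removed from
# observation is correct with probability p**n.
for p in (0.99, 0.95, 0.90):
    for n in (5, 10, 20):
        print(f"per-step reliability {p:.2f}, {n:2d} steps -> "
              f"chain reliability {p**n:.3f}")
```

Even at 95% per step, twenty steps of inference leave you with a conclusion that is more likely wrong than right; that is the intuition behind my worry about long inferential chains.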
Given that your arguments seem to be based mainly on simple formal models working in what I see as an “ideal” universe, from which you then draw your chain of inferences leading to powerful AGI, I assume you might have a background in mathematics and/or philosophy.
I do think that my article is actually rather bad at addressing AGI from this angle.
I’m honestly unsure if the issue could even be addressed from this perspective, but I do think it might be worth writing a broader piece addressing why this perspective is flawed (i.e. an argument for why a perspective/model/world-view based on long inferential distances is inherently flawed).
So, I honestly think this conversation might not have been pointless after all, at least not from my side, because it gives me an idea for an essay and a reason to write it.
Granted, I assume you have still gained nothing in terms of understanding my perspective, because quite frankly I did a bad job of addressing it in a way that you would understand; I was not addressing the correct problem. For that I am sorry.
Then again, I might be making too many assumptions about your perspective and background here, stacking imperfect inference upon imperfect inference and creating a caricature that does not match reality in any meaningful way.