So—I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare.
The success of the self-modifying AI would make the observations of that AI’s builders extremely rare… why? Because the AI’s observations also count, and it presumably generates them many orders of magnitude faster?
For a moment, I will assume I have interpreted that correctly. So? How is this risky, and how would creating billions of simulated humanities change that risk?
I think the argument is that the overwhelming number of simulated humanities somehow makes it likely that the original builders are actually themselves a simulation of those builders running under an AI? How would this make any difference? How would it be expected to “percolate up” through the stack? Presumably the “original,” top-level group of researchers still exists somewhere, no? How are they not at risk?
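If that is the intended reading, here is a toy version of the counting as I understand it, with N as my own placeholder for the number of simulated humanities (not a number taken from the proposal):

    # Toy sketch of the anthropic counting I *think* is being claimed; this is
    # my own reconstruction, not anything stated in the proposal itself.
    N = 10**9  # "billions of simulated humanities" (placeholder value)

    # If the one original builder civilization and the N simulated copies are
    # all treated as equally likely places to "find yourself", the chance that
    # you are the original is:
    p_original = 1 / (N + 1)
    print(p_original)  # roughly 1e-9, i.e. "you are almost certainly a simulation"

Even granting that arithmetic, though, it only tells each copy what to believe; I still do not see how it changes anything for whoever is actually at the top of the stack.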
How is it that the builders’ observations are fine, the AI’s are bad, but those of the simulated humans running in the AI are suddenly good?
I think, after reading what I have, that this is the same fallacy I talked about in the other thread—the idea that if you find yourself in a rare spot, it must mean something special, and that you can work the probability of that rareness backwards to a conclusion. But I am by no means sure, or even mostly confident, that I am interpreting the proposal correctly.
Anyone want to take a crack at enlightening me?