I’m neither Eliezer nor (so far as you know) an AGI, but I think (1) I couldn’t be convinced by evidence that beliefs should not respond to evidence but (2) I could be led by evidence to abandon my belief that they should. (Probably along with most of my other beliefs.) What it would take for that would be a systematic failure of beliefs arrived at by assessing evidence to match future evidence any better than beliefs arrived at in other ways. I think that would basically require future evidence to be random; in fact that’s roughly what “random” means. I’m not sure that I can actually imagine a world like that, though.
I think Doug should amend his criterion to say “… with no sign of any increase in entropy elsewhere”. But it seems to me that a being with no power other than (say) being able to induce modestly sized temperature gradients would not thereby qualify to be called a god. (If physicists announce tomorrow that the second law of thermodynamics can be cheated by some cunning technique with the word “quantum” in it, are we suddenly all gods?) And if the power is sufficiently limited (it takes time, and only operates on a small region of space, and the temperature gradient induced is very small) then it doesn’t even qualify as god-like power in my book. But I expect Doug wasn’t being perfectly serious.
What do you mean by “doesn’t present any serious problems”? That you have no trouble thinking of evidence that would suffice to convince you of those things? (If so, I agree.) (It’s not fair to blame Ben for picking “Christianity is true” instead of something more specific; he was just copying Eliezer.)
“I’m not sure that I can actually imagine a world like that, though.”
A computer simulation with infinite processing power that runs a person from an initial state (the standard one is solely based on their genetics, but for your experiment we could use a brain download of a given human from partway through their life) through all possible sequences of sensory inputs.
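The thought experiment assumes infinite processing power, so it can’t literally be run, but a finite toy version is easy to sketch. This is a minimal illustration only: the “person” here is a stub whose state is just its percept history, the percept alphabet and sequence length are made up, and nothing about real brain simulation is implied.

```python
from itertools import product

def step(state, percept):
    # Toy transition function: the "person's" state is simply the
    # tuple of percepts experienced so far. A real simulation would
    # update a vastly richer brain state here.
    return state + (percept,)

def run_one(state, seq):
    # Feed one sequence of sensory inputs through the person, step by step.
    for percept in seq:
        state = step(state, percept)
    return state

def run_all_input_sequences(initial_state, percepts, length):
    # Finite stand-in for "all possible sequences of sensory inputs":
    # enumerate every length-n sequence over a finite percept alphabet
    # and run the person through each one from the same initial state.
    return [
        (seq, run_one(initial_state, seq))
        for seq in product(percepts, repeat=length)
    ]

# With 2 possible percepts and sequences of length 3, there are
# 2**3 = 8 distinct runs of the person.
results = run_all_input_sequences((), ("light", "dark"), 3)
```

The point of the sketch is only to make the combinatorics concrete: the number of runs grows as (number of percepts) ** (sequence length), which is why the original experiment needs unbounded computation.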