Visualizing Eutopia
Followup to: Not Taking Over the World
“Heaven is a city 15,000 miles square or 6,000 miles around. One side is 245 miles longer than the length of the Great Wall of China. Walls surrounding Heaven are 396,000 times higher than the Great Wall of China and eight times as thick. Heaven has twelve gates, three on each side, and has room for 100,000,000,000 souls. There are no slums. The entire city is built of diamond material, and the streets are paved with gold. All inhabitants are honest and there are no locks, no courts, and no policemen.”
-- Reverend Doctor George Hawes, in a sermon
Yesterday I asked my esteemed co-blogger Robin what he would do with “unlimited power”, in order to reveal something of his character. Robin said that he would (a) be very careful and (b) ask for advice. I asked him what advice he would give himself. Robin said it was a difficult question and he wanted to wait on considering it until it actually happened. So overall he ran away from the question like a startled squirrel.
The character thus revealed is a virtuous one: it shows common sense. A lot of people jump after the prospect of absolute power like it was a coin they found in the street.
When you think about it, though, it says a lot about human nature that this is a difficult question. I mean—most agents with utility functions shouldn’t have such a hard time describing their perfect universe.
For a long time, I too ran away from the question like a startled squirrel. First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto. After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.
Not that there’s anything wrong with that. Indeed, this is still the plan. But it still meant that I, personally, was ducking the question.
Why? Because I expected to fail at answering. Because I thought that any attempt by humans to visualize a better future was going to end up recapitulating the Reverend Doctor George Hawes: apes thinking, “Boy, if I had human intelligence I sure could get a lot more bananas.”
But trying to get a better answer to a question out of a superintelligence is a different matter from entirely ducking the question yourself. The point at which I stopped ducking was the point at which I realized that it’s actually quite difficult to get a good answer to something out of a superintelligence, while simultaneously having literally no idea how to answer yourself.
When you’re dealing with confusing and difficult questions—as opposed to those that are straightforward but numerically tedious—it’s quite suspicious to have, on the one hand, a procedure you can execute to reliably answer the question, and, on the other hand, no idea of how to answer it yourself.
If you could write a computer program that you knew would reliably output a satisfactory answer to “Why does anything exist in the first place?” or “Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?”, then shouldn’t you be able to at least try executing the same procedure yourself?
I suppose there could be some section of the procedure where you’ve got to do a septillion operations and so you’ve just got no choice but to wait for superintelligence, but really, that sounds rather suspicious in cases like these.
So it’s not that I’m planning to use the output of my own intelligence to take over the universe. But I did realize at some point that it was too suspicious to entirely duck the question while trying to make a computer knowably solve it. It didn’t even seem all that morally cautious, once I put it in those terms. You can design an arithmetic chip using purely abstract reasoning, but would you be wise to never try an arithmetic problem yourself?
And when I did finally try—well, that caused me to update in various ways.
It does make a difference to try doing arithmetic yourself, instead of just trying to design chips that do it for you. So I found.
Hence my bugging Robin about it.
For it seems to me that Robin asks too little of the future. It’s all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk...
I thought that Robin might be asking too little, due to not visualizing any future in enough detail. Not the future but any future. I’d hoped that if Robin had allowed himself to visualize his “perfect future” in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.
It’s hard to see on an emotional level why a genie might be a good thing to have, if you haven’t acknowledged any wishes that need granting. It’s like not feeling the temptation of cryonics, if you haven’t thought of anything the Future contains that might be worth seeing.
I’d also hoped to persuade Robin, if his wishes were complicated enough, that there were attainable good futures that could not come about by letting things go their own way. So that he might begin to see the future as I do, as a dilemma between extremes: The default, loss of control, followed by a Null future containing little or no utility. Versus extremely precise steering through “impossible” problems to get to any sort of Good future whatsoever.
This is mostly a matter of appreciating how even the desires we call “simple” actually contain many bits of information. Getting past anthropomorphic optimism, to realize that a Future not strongly steered by our utility functions is likely to contain little or no utility, for the same reason it’s hard to hit a distant target while shooting blindfolded...
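A minimal sketch of that arithmetic, assuming (as a toy model, not anything more) that the target future is specified by N independent binary features which an unsteered process gets right with probability 1/2 each:

```python
import random

def blind_hit_probability(num_bits: int) -> float:
    """Chance that an unsteered, uniform-random outcome matches a target
    specified by num_bits independent binary features."""
    return 0.5 ** num_bits

def monte_carlo_check(num_bits: int, trials: int = 200_000) -> float:
    """Empirical sanity check for small num_bits: draw random outcomes
    and count how often they match the target exactly."""
    target = [random.getrandbits(1) for _ in range(num_bits)]
    hits = sum(
        all(random.getrandbits(1) == bit for bit in target)
        for _ in range(trials)
    )
    return hits / trials

for n in (10, 20, 40):
    print(f"{n:2d} bits of specification -> P(blind hit) = {blind_hit_probability(n):.3e}")

print("empirical check at 10 bits:", monte_carlo_check(10))
```

Even forty bits is a tiny specification by the standard of human values, and it already pushes the chance of a blind hit below one in a trillion.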
But if your “desired future” remains mostly unspecified, that may encourage too much optimism as well.