why is it that once you try out being in a rationalist community you can’t bear the thought of going back
Nitpick: It took me a bit to realize you meant “going back to being among non-rationalists” rather than “going back to the meeting”.
Same here. I suggest Eliezer edit it to make the intent clearer at first reading.
Or you could start talking about feminism, in which case you can say pretty much anything and it’s bound to offend someone. (Did that last sentence offend you? Pause and reflect!)
Unfortunately I recognize that as the bitter truth, so it’s of no use for me for training purposes.
Here’s something which might work as an indignation test—could it be a good move for an FAI to set a limit on human intelligence?
If an AI can be built at all, then humanity is demonstrably an AI-creating species. As technology improves and human knowledge spreads, it will become easier and easier to make AIs, and the risk of someone creating a UFAI that the FAI can’t defeat goes up.
It will be easier to have people who can’t make AIs than to try to control the tech and knowledge comprehensively enough to make sure there are no additional FOOMs.
I considered limiting initiative (imposing akrasia) rather than intelligence, but I think that would impact a wider range of human values.
That’s funny—I don’t consider the FAI thing even remotely “offensive” (perhaps “debatable”, in the sense of “I’m not sure how likely it is—do you have any evidence?”, but not “offensive”). I wrote a short story in which the FAI kept human beings humanly intelligent; this isn’t explained in the story, but in my background notes it brought humans up to a fairly high minimum without changing the overall intelligence level.
I don’t have evidence. I’m just generalizing from one example: folks at LW are very fond of being intelligent, would probably like to be more intelligent, and would resent being knocked down to an IQ of 120 or whatever it would take to make creating another AI impossible.
If an AI can be built at all, then humanity is demonstrably an AI-creating species. As technology improves and human knowledge spreads, it will become easier and easier to make AIs, and the risk of someone creating a UFAI that the FAI can’t defeat goes up.
I would think that something as much more intelligent than humans as the FAI would be able to prevent humans from creating a UFAI that could defeat it, without having to limit their intelligence.
Apologies for not being at all indignant, but can we generalize this to: you have suggested that it could be good to limit something good because doing so is a sufficient solution to a specific problem?
I’d appreciate it if someone could show how endorsements of locally bad, merely sufficient solutions are (or aren’t) all implicitly arguments from ignorance, confessions of not knowing how to achieve the same results without the negative local consequences.
In other words, sure, limiting a locally good thing X could be good on balance if doing so has generally positive consequences (like assassinating Hitler on certain dates), but that really depends on there not being something better on balance overall. An interesting case of that is a solution with the same positive consequences but fewer negative ones.