“How dire [do] the real world consequences have to be before it’s worthwhile debating dishonestly”?
My brief answer is that one lower bound is: if the amount that rationality affects humanity and the universe is decreasing over the long term.
(Note that if humanity is destroyed, the amount that rationality affects the universe probably decreases).
This is also my answer to the question "What is winning for the rationalist community?"
Rationality is winning if, over the long term, rationality increasingly affects humanity and the universe.
Downvoted for the fake utility function.

“I won’t let the world be destroyed because then rationality can’t influence the future” is an attempt to avoid weighing your love of rationality against anything else.
Think about it. Is it really rationality no longer being in control that bugs you, rather than everyone dying, or the astronomical number of worthwhile lives that will never be lived?
If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?
Thank you. I initially wrote my function with the idea of making it one of many possible “lower bounds” on how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought “this works fine as a general theory, not just a lower bound”.
Thank you for helping me think more clearly.