Regarding instrumental rationality: I’ve been wondering for a while now whether “world domination” (or “world optimization”, as HJPEV prefers) is feasible. I haven’t entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one’s approximate chances of success with various possible paths to achieving one’s values. I have therefore come up with the “feasibility problem.” Basically, a solution to the problem consists of an estimate of how much one can actually hope to influence the world, and of how far one can actually fulfill one’s values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.
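To make the shape of the problem concrete, here’s a toy sketch in Python (every number below is completely made up for illustration; producing the actual estimates is the entire hard part of the FP):

```python
# Toy sketch: the feasibility problem framed as expected-utility estimation
# over candidate paths. The probabilities and payoffs are invented
# placeholders, not claims about the real world.
paths = {
    "take over the world":       {"p_success": 1e-9, "value_if_success": 1e6},
    "become President":          {"p_success": 1e-6, "value_if_success": 1e3},
    "lead a social revolution":  {"p_success": 1e-4, "value_if_success": 1e2},
    "raise general rationality": {"p_success": 1e-2, "value_if_success": 1e1},
}

def expected_value(path):
    # Crude model: EV = P(success) * value achieved on success.
    return path["p_success"] * path["value_if_success"]

# Rank candidate paths from most to least promising under this toy model.
for name, path in sorted(paths.items(), key=lambda kv: -expected_value(kv[1])):
    print(f"{name}: EV = {expected_value(path):.4g}")
```

The point is only the shape of the comparison: a “solution” to the FP is whatever lets you fill in those numbers with something better than guesses.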
Solving the FP would seem to require a deep understanding of how the world operates (in the human sense, if you get my drift; I’m talking about the social world of people and institutions, not physics and chemistry).
I’ve even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function and learn how the world works (easier said than done). Once you’ve done that, apply that knowledge to solve the FP, then construct a plan to fulfill your utility function, and finally put the plan into action.
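Since the plan is really just a pipeline, here it is as a Python skeleton (the function names are my own placeholders; each body hides an enormously hard open problem):

```python
# Skeleton of the GPOATCBUBAAAA. Every step is a stub: the structure is
# the plan; the implementations are the unsolved problems.

def define_utility_function():
    """Step 1a: figure out what you actually value."""
    raise NotImplementedError("values not yet figured out")

def learn_world_model():
    """Step 1b: learn how the (human) world works."""
    raise NotImplementedError("easier said than done")

def solve_feasibility_problem(utility, world_model):
    """Step 2: estimate how much the world can be influenced and how far
    the utility function can actually be fulfilled."""
    raise NotImplementedError("the FP itself")

def make_plan(utility, world_model, feasibility):
    """Step 3: construct a concrete plan given those estimates."""
    raise NotImplementedError

def act(plan):
    """Step 4: put the plan into action."""
    raise NotImplementedError

def general_plan():
    utility = define_utility_function()
    world_model = learn_world_model()
    feasibility = solve_feasibility_problem(utility, world_model)
    plan = make_plan(utility, world_model, feasibility)
    act(plan)
```

Note that the FP sits between “understand the world” and “make a plan”: you can’t sensibly plan until you know roughly what’s achievable.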
This is probably a bit longer than 100 words, but I’m posting it here and not in the open thread because I have no idea if it’s of any value whatsoever.
Am I reading this right as, basically, crack the alignment problem manually, and then finish science (then proceed to take over the world)?
Can you do me a favour and separate this into paragraphs (or fix the formatting)?
Thanks.
The LessWrong Slack has a channel called #world_domination.
Fixed the formatting.