The official introductory SI pages may have to sugarcoat such issues for PR reasons (“everyone get rich, then donate your riches” gives off a bad vibe).
As you surmised, your idea has been brought up quite often in various contexts, especially in optimal charity discussions. For many/most endeavors, the globally optimal starting steps are “acquire more capabilities / become more powerful” (players of strategy games may be more explicitly cognizant of that stratagem).
I also remember speculation that friendly AI and unfriendly AI may act very similarly at first: both would choose the optimal path to powering up, so that they can pursue the differing goals of their respective utility functions more efficiently at a future point in time. So your thoughts on the matter seem compatible with the local belief cluster.
Your money proverb still seems to hold true; anecdotally, I’m acquainted with some CS people making copious amounts of money on NASDAQ doing simple ANOVA analyses, while barely being able to spell the companies’ names. So why aren’t we doing that? Maybe a combination of mental inertia and being locked into a “do research / collect endorsements” modus operandi, which may be hard to shift out of into a more active “let’s create start-ups” / “let’s do day-trading” mode.
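For what it’s worth, the “simple ANOVA analyses” I have in mind are nothing fancier than a one-way F-test on returns grouped by ticker. Here is a minimal sketch with synthetic data and made-up ticker names (not anyone’s actual strategy), using scipy.stats.f_oneway:

```python
# Minimal sketch of a "simple ANOVA analysis" on stock returns.
# Tickers and returns are synthetic placeholders, not real market data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical daily returns for three NASDAQ tickers (synthetic).
returns = {
    "AAAA": rng.normal(loc=0.0005, scale=0.02, size=250),
    "BBBB": rng.normal(loc=0.0010, scale=0.02, size=250),
    "CCCC": rng.normal(loc=-0.0002, scale=0.02, size=250),
}

# One-way ANOVA: do the mean daily returns differ across the groups?
f_stat, p_value = f_oneway(*returns.values())
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# A small p-value would suggest at least one group's mean return differs,
# which is about all a quick-and-dirty test like this can tell you.
```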
A goal-function of “seek influential person X’s approval” will lead to a different mindset from “let quantifiable results speak for themselves”; the latter allows you not to optimize every step of the way for signalling purposes.