How can the short-term preference be classified as “live forever” and the long-term preference as “die after a century”?
Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.
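The induction can be made concrete with a minimal sketch (all names here are hypothetical, not from the original discussion): the short-term rule endorses surviving one more day at *every* day, so there is no day on which it endorses stopping, whereas a long-term rule with a fixed horizon does name such a day.

```python
# Hypothetical sketch: "live till tomorrow" applied at every day t
# inductively yields "live forever"; a fixed-horizon preference does not.

def short_term_prefers_living(day: int) -> bool:
    """At every day t, prefer surviving to day t+1."""
    return True  # holds for all t, so by induction: live forever


def long_term_prefers_living(day: int, horizon: int = 100 * 365) -> bool:
    """Prefer living only up to a fixed horizon (roughly a century)."""
    return day < horizon


# Under the short-term rule there is no day on which the human prefers
# to stop; under the long-term rule there is.
assert all(short_term_prefers_living(t) for t in range(200 * 365))
assert not long_term_prefers_living(150 * 365)
```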
Do the arguments imply that the AI will have an RLong function and a PKurtz function for preference-shaping?
No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences.
What we (the AI) have to “do” is decide which philosopher the human meets first, and hence what their future preferences will be.
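As I read this setup, it could be sketched as follows (a toy model under my own assumptions; the reward functions and their exact shapes are illustrative stand-ins, not anything stated in the thread): the human's effective reward is one of two components, selected by which philosopher they meet first, and the AI's only lever is that meeting.

```python
# Hypothetical toy model: the human's active preference is conditional
# on which philosopher they meet first; the AI chooses the meeting.

def r_long(lifespan_days: int) -> float:
    # stand-in: values every extra day of life ("live forever")
    return float(lifespan_days)


def r_kurtz(lifespan_days: int) -> float:
    # stand-in: values life only up to a century ("die after a century")
    century = 100 * 365
    return float(min(lifespan_days, century))


def human_reward(first_philosopher: str, lifespan_days: int) -> float:
    """The human modelled as a mix of RLong and RKurtz preferences,
    conditional on which philosopher they met first."""
    if first_philosopher == "RLong":
        return r_long(lifespan_days)
    if first_philosopher == "PKurtz":
        return r_kurtz(lifespan_days)
    raise ValueError(first_philosopher)


# The AI's decision fixes which preference it will then try to implement.
print(human_reward("RLong", 200 * 365))   # 73000.0
print(human_reward("PKurtz", 200 * 365))  # 36500.0
```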
(i) Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.
Then, “die after a century” is the inductive consequence of the long-term “?” preference applied to “?”.
(ii) No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences.
What we (the AI) have to “do” is decide which philosopher the human meets first, and hence what their future preferences will be.
I am still unable to sort out the relation between the “human”, the “AI”, and the “philosophers”. I am reading it as follows: there is some human “H” who will meet the philosophers “RLong” and “PKurtz”, whose preferences will then be modelled as “RLong” and “RKurtz”, conditional on whether H meets Mr./Ms. “RLong” first or Mr./Ms. “PKurtz” first. Am I right in understanding this much?
Apart from this, what/who/where is “(the AI)”, if we are not referring to our respective understandings of “the AI”?
Moreover, taking “we”, i.e. “ourselves”, as “the AI”, i.e. our respective understandings of AI theory: in my understanding, the human “H” should meet Mr./Ms. “PKurtz” first, because that would prove comparatively more beneficial. My understanding suggests an outcome “O” measured in terms of efficient utilization of time; whether the human “H” were me or not, it would save time.
To achieve anything in the “long term” first needs an understanding of the “short term”.