Please explain the term “meta-preferences”, if here it doesn’t mean the same thing as in James Buchanan’s 1985 work The Reason of Rules, where a “meta-preference” is ‘a preference for preferences’.
It is ‘a preference for preferences’; e.g. “my long-term needs take precedence over my short-term desires” is a meta-preference (in fact the use of the terms ‘needs’ vs ‘desires’ is itself a meta-preference, since at the lowest formal level both are just preferences).
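To make the distinction concrete, here is a minimal sketch (the names and scoring rules are hypothetical placeholders for illustration, not anything from the discussion above): base-level preferences score outcomes, while a meta-preference is just an ordering over those preferences.

```python
# A toy sketch (hypothetical names and scores, purely illustrative): at the base
# level, "needs" and "desires" are both just preferences over outcomes; a
# meta-preference is a preference defined over those preferences themselves.

# Base-level preferences: each maps an outcome (a dict) to a score.
base_preferences = {
    "long_term_need": lambda outcome: outcome.get("health", 0),
    "short_term_desire": lambda outcome: outcome.get("pleasure_now", 0),
}

# A meta-preference: an ordering over the base preferences, e.g.
# "my long-term needs take precedence over my short-term desires".
meta_preference = ["long_term_need", "short_term_desire"]  # earlier = higher priority

def choose(outcome_a, outcome_b):
    """Pick between two outcomes by consulting preferences in meta-preference order."""
    for name in meta_preference:
        pref = base_preferences[name]
        if pref(outcome_a) != pref(outcome_b):
            return outcome_a if pref(outcome_a) > pref(outcome_b) else outcome_b
    return outcome_a  # indifferent at every level

# The healthier option wins even though it scores lower on immediate pleasure.
print(choose({"health": 5, "pleasure_now": 1},
             {"health": 2, "pleasure_now": 9}))
```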
How can the short term preference be classified as “live forever” and the long term preference as “die after a century”? By your argument, then, it could equally be said that “die after a century” would take precedence over “live forever”.
Do the arguments imply that the AI will have an RLong function and a PKurtz function for preference-shaping (holding that it will have multiple opportunities)?
I was unable to gather the context in which you pose the questions “What should we do? And what principles should we use to do so?”; in particular, what is it that we have to “do”?
How can the short term preference be classified as “live forever” and the long term preference as “die after a century”?
Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.
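Spelled out as a small derivation (my own notation, offered only as a sketch of the induction): write A_t for “alive on day t” and ≻ for “is preferred to”.

```latex
% A sketch in my own notation (not a formula from the original exchange):
% A_t stands for "alive on day t", and x \succ y for "x is preferred to y".
\[
  \forall t \ge 0:\quad A_{t+1} \;\succ\; \lnot A_{t+1}
\]
% Held on day 0 this yields a preference for A_1, on day 1 for A_2, and so on;
% by induction no finite day is ever a preferred day to die, which is
% extensionally the "live forever" preference.
```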
Do the arguments imply that the AI will have an RLong function and a PKurtz function for preference-shaping
No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences.
What we (the AI) have to “do”, is decide which philosopher the human meets first, and hence what their future preferences will be.
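As I read that setup, a minimal sketch might look like the following (the reward shapes, numbers, and function names are placeholders I am assuming for illustration; only the RLong/RKurtz labels come from the thread): the human’s future reward function is conditional on which philosopher the AI lets them meet first, and the AI’s “decision” is exactly that choice.

```python
# Hypothetical sketch: the human ends up with RLong-style preferences if they
# meet the pro-immortality philosopher (Long) first, and RKurtz-style
# preferences if they meet the pro-mortality philosopher (Kurtz) first.
# The AI does not hold either reward function itself; its only lever here is
# which meeting happens first.

def r_long(outcome):
    """Post-Long preferences: value living indefinitely."""
    return outcome["years_lived"]          # more years is always better

def r_kurtz(outcome):
    """Post-Kurtz preferences: value a full but finite life of roughly a century."""
    return -abs(outcome["years_lived"] - 100)

def human_reward(first_philosopher):
    """The human's future preferences, conditional on who they meet first."""
    return r_long if first_philosopher == "Long" else r_kurtz

def ai_decide(candidate_outcomes):
    """The AI's actual decision problem: which philosopher the human meets first.
    Which criterion it *should* use for that choice is exactly the open question."""
    scores = {}
    for philosopher in ("Long", "Kurtz"):
        reward = human_reward(philosopher)
        scores[philosopher] = max(reward(o) for o in candidate_outcomes)
    return scores  # deliberately returns both, rather than picking a "winner"

outcomes = [{"years_lived": y} for y in (80, 100, 1000)]
print(ai_decide(outcomes))  # e.g. {'Long': 1000, 'Kurtz': 0}
```

In this sketch RLong/RKurtz are properties of the human’s (conditional) preferences, not components of the AI; the AI only chooses which philosopher comes first.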
(i) Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.
Then, “die after a century” is the inductive consequence of the long-term “?” preference applied to “?”.
(ii) No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences.
What we (the AI) have to “do”, is decide which philosopher the human meets first, and hence what their future preferences will be.
I am still unable to sort out the relation between the “human”, “the AI”, and the “philosophers”. I read it as follows: there is some human “H” who will meet the philosophers “RLong” and “PKurtz”, and whose preferences will be modelled as “RLong” or “RKurtz” depending on whether “H” meets Mr./Ms. “RLong” first or Mr./Ms. “PKurtz” first. Am I right in understanding this much?
Apart from this, what/who/where is “(the AI)”, if we are not referring to our respective understandings of “the AI”?
Moreover, taking “we”, i.e. ourselves, to be “the AI”, i.e. our respective understandings of the AI theory: in my understanding the human “H” should meet Mr./Ms. “PKurtz” first, because that would prove comparatively more beneficial. My understanding measures the outcome “O” in terms of efficient utilization of time, and whether the human “H” were me or not, meeting “PKurtz” first would save time.
To achieve anything in the “long term” first requires an understanding of the “short term”.