With respect to discount rates:
I understand your argument against letting the discount rate live in one's pure preferences, but what do you offer in its stead? No discount rate at all? Should one care equally about all time periods? Isn't that a touch unfair to anyone who values an internal discount rate? And for global catastrophic risk management: should no discount rate be applied for valuation and modeling purposes? Isn't that the same as modeling a 0% discount rate?
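To make that last question concrete, here is a minimal sketch (my own illustration; the `present_value` helper and the 3% figure are assumptions for the example, not anything from your post). It shows why "apply no discounting" and "apply a 0% rate" are numerically the same model:

```python
# A minimal illustration of discounted valuation of a yearly payoff
# stream. With rate r, the year-t payoff is weighted by 1 / (1 + r)**t;
# setting r = 0 makes every weight equal to 1, i.e. equal concern for
# all time periods.

def present_value(payoffs, r):
    """Sum of payoffs discounted at annual rate r (t = 0, 1, 2, ...)."""
    return sum(p / (1 + r) ** t for t, p in enumerate(payoffs))

stream = [100.0] * 5                 # 100 per year for five years
print(present_value(stream, 0.03))   # ~471.7 at a 3% rate
print(present_value(stream, 0.0))    # 500.0: identical to no discounting
```

So a framework that declines to discount at all is, for valuation purposes, indistinguishable from one that fixes r = 0.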
With respect to AI (excuse my naivety):
It seems that if a present-day human created an AI, the creation would inevitably be biased toward some type of human traits or incentive mapping. Otherwise we are assuming that the “human-creators” have a knowledge base beyond the understanding of “non-creator-humans”, such that they could create an AI with no ties to (or resemblance to) human wants, needs, incentives, and values. That seems rather implausible to me.
Without omniscient human-creators, I get the feeling that an AI would be inherently biased toward human characteristics; otherwise, why wouldn't the humans creating the AI simply try to “create” themselves in the likeness of an “ideal” AI? Furthermore, in keeping with this theme, do you think humans and an AI would have the same incentive for time travel?
Thank you for your time and consideration.