Nick, truly fascinating read. Thank you. Although I had not read Bostrom’s paper prior to today, I am glad to find that we come to largely identical conclusions. My core claim, ‘What is good is what increases fitness’, does not mean that I argue for the replacement of humanity with non-eudaemonic fitness-maximizing agents, as Bostrom calls them.
There are two paths to maximizing an individual’s fitness:
A) Change an individual’s genetic/memetic makeup to increase its fitness in a given environment
B) Change an individual’s environment to increase its genetic/memetic fitness
In my AI friendliness theory I argue for option B), using a friendly AGI which in essence represents Bostrom’s singleton.