I started with “37 Ways words can be Wrong” and didn’t find much immediate benefit from the exercise in the way of being less naive about Computationalism.
I scroll down your list and see there’s a lot about language. Is such an extensive education in language quite necessary? (Or do I need to keep reading to see?)
Most of those readings will tell you nothing about computationalism directly; rather, they will broaden your view of the world in a way that should eventually make you a better rationalist about issues related to computationalism.
The main reason I put the personal identity text there, for instance, is to cause a transition from frequently thinking that something (like personal identity) will carry over to its closest continuator in a new, slightly different scenario, toward more gradualist thinking, in which things may sometimes dissolve along any dimension you try to vary them. In a future in which some folks try to build FAI, this will be of extreme importance when considering the values dimension. For instance, will what we want to protect be preserved if we extrapolate human intelligence? This is my current line of work (any input welcome).
Does this mean you’re thinking about uploaded people here? I think that is an important research question.
I was thinking about CEV, but yes, the same question applies to uploads (though it is not the classic upload issue).
I’m glad you find it important. I’m going to dedicate some time to that research.
Does anyone have good reasons to say it is not a good research avenue?
What we value as good and fun may increase in volume, because we can discover new spaces with increasing intelligence. Will what we want to protect be preserved if we extrapolate human intelligence? Yes, if this new intelligence is not some kind of mind-blind autistic savant 2.0 who clearly can’t preserve high levels of empathy and share the same “computational space”. If we are going to live as separate individuals, then cooperation demands some fine-tuned empathic algorithms, so we can share our values with others and respect the qualitative space of others. For example, I may not enjoy dancing or having a homosexual relationship (I’m not a homophobe), but I’m able to extrapolate it from my own values and be motivated to respect its preservation as if it were mine. (How? By simulating it. As a highly empathic person, I can say that it hurts to make others miserable, so it works as an intrinsic motivation and goal.)
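To make the “simulating it” step concrete, here is a toy sketch (the `Agent` class and its methods are names I invented for illustration, not a real proposal or algorithm): an agent that does not hold a value itself borrows the simulated other’s weighting, so preserving that value becomes an intrinsic goal of its own.

```python
# Toy model: empathic motivation via simulation.
# All names here (Agent, simulate, motivation_to_protect) are invented
# for illustration; this is a sketch of the idea, not a real algorithm.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    # Maps a value (e.g. "dancing") to how much this agent cares about it.
    values: dict = field(default_factory=dict)

    def simulate(self, other: "Agent", value: str) -> float:
        """Estimate how much `other` cares by running their weights, not ours."""
        return other.values.get(value, 0.0)

    def motivation_to_protect(self, other: "Agent", value: str) -> float:
        # Own weight plus empathic weight borrowed from the simulated other:
        # "it hurts to make others miserable" acts as intrinsic motivation.
        return self.values.get(value, 0.0) + self.simulate(other, value)


me = Agent("me", {"reading": 1.0})    # I don't value dancing myself...
you = Agent("you", {"dancing": 1.0})  # ...but you do.
print(me.motivation_to_protect(you, "dancing"))  # 1.0 -> worth preserving
```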