Eli, I’ve been busy fighting with models of cognitive bias in finance and only just now found time to reply:
Suppose that I show you the sentence “This sentence is false.” Do you convert it to ASCII, add up the numbers, factorize the result, and check if there are two square factors? No; it would be easy enough for you to do so, but why bother? The concept “sentences whose ASCII conversion of their English serialization sums to a number with two square factors” is not, to you, an interesting way to carve up reality.
Sure, this property of adding up the ASCII codes, factorising the result and checking for square factors appears to have no value, and thus I can't see why a super intelligent machine would spend time on it. Indeed, to the best of my recollection, nobody has ever suggested this property to me before.
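Since the property you describe is a concrete procedure, here is a rough Python sketch of it, just to make the example explicit. I'm reading "two square factors" as two distinct perfect-square divisors greater than 1; the original wording doesn't pin that down, so treat the interpretation as mine.

```python
# Sketch of the "uninteresting property": sum the ASCII codes of a sentence,
# then check whether the total has at least two square factors.
# "Two square factors" is read here as two distinct perfect-square divisors > 1,
# which is only one plausible interpretation of the wording above.

def ascii_sum(sentence: str) -> int:
    """Sum of the ASCII codes of every character in the sentence."""
    return sum(ord(ch) for ch in sentence)

def square_divisors(n: int) -> list[int]:
    """All perfect-square divisors of n that are greater than 1."""
    return [d * d for d in range(2, int(n ** 0.5) + 1) if n % (d * d) == 0]

def has_two_square_factors(sentence: str) -> bool:
    return len(square_divisors(ascii_sum(sentence))) >= 2

print(has_two_square_factors("This sentence is false."))
```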
But is morality like this? No, it isn't. Every day, in social interaction, morals are either expressed or implied. If I turn on the TV and watch a soap, I see people facing ethical decisions. If I switch channels to politics, I hear people telling me what they think is or is not ethical, what their values are, and so on. I would say that a large proportion of debate in the media has an ethical element to it. My phone rings and it's my friend on the line, who has recently broken up with his girlfriend and wants to talk to me about it. At various points our discussion touches, explicitly or implicitly, on moral questions. Although ethics is complex, sometimes vague, and not always consistent, like it or not we are swimming in this sea of morals every day. If you want to understand why people do what they do, why they interact with each other as they do, and how they will react to some new situation, one thing you must have is an understanding of their morals, ethics and values. Without this, a lot of human behaviour is inexplicable.
Thus, whether a super intelligent machine seeks to deliver us into paradise or to eliminate us from the planet as if we were some kind of disease, if it can understand our behaviours and motives then it can more efficiently achieve its goals.
It is only this point that I'm arguing for: that human morality (as it currently is) is a natural category for the machine, given that its environment will be full of humans. Your reply above then goes on to how such a morality could be extended into the future in a consistent way, and so on. I accept the importance of this, but those issues lie beyond the point I was trying to make.
Ok, now, will a super intelligent machine consider how human morality could be extended into the future and all that? I think it probably will, though my reasons for thinking this are, at the moment, more intuitive. I suspect that one of the things a super intelligent machine will do is look through all the existing literature on super intelligent machines. It will read my thesis, read this blog, read all the comments posted here, read science fiction stories, and so on. It will then dissect all this information in order to understand our attitudes to intelligent machines, identify the errors in our arguments, extend the theory and fill in all the bits we couldn't figure out to see where we were heading... perhaps all in some tiny fraction of a second. All this might help it better understand itself, or, perhaps more importantly, how we view it and why.
waves at super intelligent machine
:-)