Maybe I am missing something, but hasn’t a seed AI already been planted? Intelligence (whether that means the ability to achieve goals in general, or the ability to do what humans can do) depends on both knowledge and computing power. Currently the largest collection of knowledge and computing power on the planet is the internet. By the internet, I mean both the billions of computers connected to it and the two billion brains of its human users. Both knowledge and computing power are growing exponentially, doubling every 1 to 2 years, in part by adding users, but mostly on the silicon side, by accumulating human knowledge and the hardware to sense, store, index, and interpret it.
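To put a number on what that compounding implies, here is a minimal sketch in Python; the 1.5-year doubling time is my own assumed midpoint, since the claim only bounds it between 1 and 2 years:

```python
# Minimal sketch: compound growth under a fixed doubling time.
# The 1.5-year figure is an assumed midpoint, not a measured value.

def projected_growth(initial: float, years: float, doubling_years: float = 1.5) -> float:
    """Size of a quantity after `years`, doubling every `doubling_years`."""
    return initial * 2 ** (years / doubling_years)

# Starting from 1 unit of capacity, a decade of such growth:
print(projected_growth(1.0, 10.0))  # ~101.6x, roughly two orders of magnitude
```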
My question: where is the internet’s reward button? Where is its goal of “make humans happy”, or whatever it is, coded? How is it useful to describe the internet as a self-improving goal-directed optimization process?
I realize that it is useful, although not entirely accurate, to describe the human brain as a goal-directed optimization process. Humans have certain evolved goals, such as food, and secondary goals, such as money. Humans who are better at achieving these goals are assumed to be more intelligent. The model is not entirely accurate because humans are not completely rational. We don’t directly seek positive reinforcement. Rather, positive reinforcement is a signal that increases the probability of the actions that immediately preceded it, for example, shooting heroin into a vein. Thus, unlike a rational agent’s, your desire to use heroin (or to wirehead) depends on how many times you have tried it in the past.
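The distinction can be made concrete with a toy sketch (the update rule and numbers are my own illustration, not a model of real neurons): a reinforcement learner’s tendency to repeat an action grows with each rewarded trial, whereas a rational utility maximizer would assign the action the same fixed expected utility regardless of history:

```python
# Toy illustration, not a model of real brains: a simple reinforcement
# learner's "desire" for an action depends on its reinforcement history.

def reinforce(trials: int, reward: float = 1.0, learning_rate: float = 0.2) -> float:
    """Action strength after `trials` positively reinforced attempts."""
    strength = 0.0
    for _ in range(trials):
        # Each reward nudges the action strength toward the reward value.
        strength += learning_rate * (reward - strength)
    return strength

# Preference differs after 1 vs. 10 exposures, unlike a rational agent,
# whose valuation of the action would be fixed by its utility alone.
print(reinforce(1))   # 0.2
print(reinforce(10))  # ~0.89
```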
We like the utility model because it is mathematically simple. But it also leads to a proof that ideal rational agents cannot exist: AIXI, the ideal utility maximizer, is not computable. Sometimes a utility model is still a useful approximation, and sometimes not. Is it useful to model a thermostat as an agent that “wants” to keep the room at a constant temperature? Is it useful to model practical AI this way?
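To see what the thermostat question amounts to, here is a sketch in the same toy spirit (the framing and numbers are mine): the thermostat recast as a one-step utility maximizer whose “utility” is just negative distance from the setpoint:

```python
# Toy framing, my own: a thermostat recast as a one-step utility maximizer.
# It "wants" the setpoint only in the sense that its chosen action always
# maximizes a utility defined as negative distance from that setpoint.

def utility(temp: float, setpoint: float = 20.0) -> float:
    return -abs(temp - setpoint)

def thermostat_act(temp: float, setpoint: float = 20.0, step: float = 1.0) -> str:
    """Pick the action whose predicted next temperature scores highest."""
    actions = {"heat": temp + step, "cool": temp - step, "idle": temp}
    return max(actions, key=lambda a: utility(actions[a], setpoint))

print(thermostat_act(17.0))  # heat
print(thermostat_act(23.0))  # cool
print(thermostat_act(20.0))  # idle
```

The code behaves identically to a plain control rule; whether the “wants” description adds anything is exactly the question.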
I think the internet has the potential to grow into something you might not wish for, for example, something that marginalizes human brains as an insignificant component. But what are the real risks here? Is it really a problem of its goals being misinterpreted or taken over?