What do you think of when you say “AGI”? To me, it is a general intelligence of some form, able to specialize in tasks as it sees fit.
Humans are a general-intelligence organism, and we’re constrained by biological needs (e.g. sleeping, eating) because we arrived here via the evolution algorithm. A general intelligence on silicon could run a million times faster than us, and becoming smarter is an instrumental goal for it, since a smarter agent can act and reach conclusions with less data and evidence.
Thus, a GI that specializes in removing its own bottlenecks, that is not constrained the way we are, and that outpaces us at sequential and parallel processing alike would be far superior at planning. Even if it starts out dumber than us, it probably would not take long for that to change.
Yes, I don’t disagree with anything you said. Do you think a machine playing at God level could beat AlphaZero at Go while giving it a 20-stone handicap?
It doesn’t have to. Specialized deployments will deliver better performance: you can build custom processors for a specific task and write custom software optimized for that task. That is different from having the flexibility to generalize. A deep neural network might be trained on chess, but it can’t suddenly start performing well on image classification without losing significant ability on its original task.
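(The inflexibility described above is the phenomenon usually called catastrophic forgetting. As a minimal sketch, here is a hypothetical PyTorch example with purely synthetic tasks standing in for “chess” and “image classification”; none of the names or data come from the conversation. A small network masters task A, is then fine-tuned only on task B, and typically loses most of its task-A accuracy.)

```python
# Minimal sketch of catastrophic forgetting (hypothetical, synthetic data).
# A network trained on task A, then fine-tuned on task B with no task-A
# data replayed, typically loses most of its task-A accuracy.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(in_dim=20, n=2000):
    """Synthetic binary classification task: a random linear decision rule."""
    w = torch.randn(in_dim)
    x = torch.randn(n, in_dim)
    y = (x @ w > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200, lr=0.05):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

xa, ya = make_task()  # stand-in for "chess"
xb, yb = make_task()  # stand-in for "image classification"

train(model, xa, ya)
print(f"task A accuracy after training on A: {accuracy(model, xa, ya):.2f}")

train(model, xb, yb)  # fine-tune on B only; task A is never revisited
print(f"task A accuracy after training on B: {accuracy(model, xa, ya):.2f}")  # usually drops sharply
print(f"task B accuracy after training on B: {accuracy(model, xb, yb):.2f}")
```

(Known mitigations such as replaying task-A data or elastic weight consolidation exist, which is part of why flexible generalization is a harder problem than per-task specialization.)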
Sorry, I don’t think what I meant was clear. What I want to say is that a godlike machine might have important limitations we are not aware of, especially when dealing with systems as complex, chaotic, and unpredictable as the external world. If someone told me the machine will win the game no matter what, I would reply that some games are so hard they cannot really be won, and if the risk of attacking is being attacked yourself, a machine might decide not to attack. EY’s premise rests on a machine that is almighty, and I am denying that possibility.