But I rarely see anyone touch on the idea of “what if we only make something as smart as us?”
But why would intelligence reach human level and then halt there? There’s no reason to think there’s some kind of barrier or upper limit at that exact point.
Even in the weird case where that were true, aren't computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself, and modify its own brain. That's already a superintelligence, isn't it?
The assumption there is that the faster the hardware underneath, the faster the sentience running on it will be. But this isn't supported by evidence. We haven't produced a sentient AI, so we can't know whether it's true.
For all we know, there may be an upper limit to "thinking" based on how information propagates through a neural system. Understanding and integrating a concept requires change, and that change may move slowly across the mind and its underlying hardware.
Humans need sleep, for example, to help us learn and retain information.
As for self-modification: we don't have atomic-level control over the meat we run on, and a program or model doesn't have atomic-level control over its hardware either. It can't move an atom at will in its underlying circuitry to speed up processing, for example. That level of control doesn't exist in nature in any form.
We don't know so many things. For example, what if consciousness requires meat? What if it's physically impossible on anything other than meat? We just assume it's possible on metal and silicon.