Artificial sentience is a technological project very comparable to the Manhattan Project, though. Prior to reaching critical mass, it isn't doing anything at all. Once you reach critical mass—in this case, useful, general-purpose AI agents—you can use the first systems to help you build the one that explodes.
The thing is that it’s all or nothing. AI agents that don’t produce more value than they cost have negative gain, and we have all sorts of crummy agents of questionable utility today (agents that try to spot fraud or machinery failure or other difficult regression problems). Once you get to positive gain, you need an AI system sophisticated enough to self-improve from its own output. Before you hook in the last piece of such a system, nothing happens, and we don’t know exactly when this will start to work.
This is fundamentally difficult to model. If you plotted human-caused fission events per year, you would see a line near zero, suddenly increasing in 1942 with the Chicago Pile, then going vertical and needing a logarithmic scale with the 1945 Trinity test.
Thousands of little advances had been made to reach that point, but it wasn’t certain until the first blinding flash and mushroom cloud that the overall effort was really going to work. There could have been all kinds of hidden laws of nature preventing a fission device from working. Similarly, there are plenty of people (often, seemingly, to protect their own sense of importance or well-being) who believe some hidden law of nature will prevent an artificial sentience from really working.
I agree with the dangers of modeling progress in this way. I’m just curious how well we can build the model, and what it would predict. For a certain sort of person, these mathematical models are more convincing than detailed explanations of why the future might go particular ways. And it seems to me that there is some low-hanging fruit around improving these sorts of models.
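To make the difficulty concrete, here is a minimal sketch (in Python, with entirely made-up toy numbers, not real measurements) of why smooth extrapolation fails on threshold-driven progress like the fission example above: the pre-threshold data are essentially flat, so any trend fit to them predicts nothing about the jump, whereas a threshold-style model at least poses the right question of when the step occurs.

```python
# Toy illustration: smooth extrapolation vs. a threshold model.
# All numbers are invented for the sake of the example.
import numpy as np

years = np.arange(1930, 1950)

# Hypothetical "fission events per year": effectively zero before the
# Chicago Pile (1942), then explosive growth afterward.
events = np.where(years < 1942, 0.0, 10.0 ** (years - 1942))

# Naive approach: fit a line to the pre-1942 data and extrapolate to 1945.
pre = years < 1942
slope, intercept = np.polyfit(years[pre], events[pre], 1)
naive_1945 = slope * 1945 + intercept

# A threshold model instead treats "when does the step occur?" as the key
# unknown; a serious version would put a probability distribution over
# step_year rather than using a point estimate.
def threshold_model(year, step_year=1942, growth=10.0):
    return 0.0 if year < step_year else growth ** (year - step_year)

print(f"naive linear extrapolation for 1945: {naive_1945:.2f}")
print(f"threshold model for 1945:            {threshold_model(1945):.0f}")
print(f"toy 'actual' value for 1945:         {events[years == 1945][0]:.0f}")
```

The naive fit predicts roughly zero forever; the interesting modeling work, and the low-hanging fruit I have in mind, is in how you reason about the step year and the post-step growth rate, not in curve-fitting the quiet period.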