Please don’t take this as a personal attack, but, historically speaking, everyone who has said “I am in the final implementation stages of the general intelligence algorithm” has been wrong so far. Their algorithms never quite worked out. Is there any evidence you can offer that your work is any different? I understand that this is a tricky proposition, since revealing your work could set off all kinds of doomsday scenarios (assuming that it performs as you expect it to); still, surely there must be some way for you to convince skeptics that you can succeed where so many others have failed.
Sadly, I think the general trend you note is correct, but the first developers to succeed may do so in relative secrecy.
As time goes on it becomes increasingly possible that some small group or lone researcher is able to put the final pieces together and develop an AGI. Assuming a typical largely selfish financial motivation, a small self-sufficient developer would have very little to gain from pre-publishing or publicizing their plan.
Eventually, of course, they may be tempted to publicize, but there is more incentive to do that later, if at all. Unless you work on it for a while and it doesn’t go much of anywhere; then, of course, you publish.
As time goes on it becomes increasingly possible that some small group or lone researcher is able to put the final pieces together and develop an AGI.
Why do you think this is the case? Is this just because the overall knowledge level concerning AI goes up over time? If so, what makes you think that the rate of increase is large enough to be significant?
Yes. This is just the way of invention in general: steady incremental evolutionary progress.
A big, well-funded team can throw more computational resources at its particular solution to the problem, but the returns are sublinear (for any one particular solution) even without Moore’s law.
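A minimal sketch of what “sublinear returns” means here, purely as an illustration; the power-law form and the exponent are assumptions for the toy model, not anything claimed in the thread:

```python
# Toy model (assumption): progress on one fixed approach scales as
# compute ** alpha with alpha < 1, so extra compute buys less and less.

def progress(compute: float, alpha: float = 0.5) -> float:
    """Hypothetical sublinear returns: progress ~ compute ** alpha."""
    return compute ** alpha

small_team = progress(1.0)    # baseline compute
big_team = progress(100.0)    # 100x the compute

# Prints roughly 10x, not 100x: the big team's compute advantage
# translates into a much smaller advantage in progress.
print(f"100x compute -> {big_team / small_team:.0f}x progress")
```

Under that assumption, a lone researcher or small group with the right idea is not hopelessly behind a large team that is merely scaling up the wrong (or the same) idea.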