Building a scaled-down version is perfectly safe, provided you limit its compute (which happens by default) and don’t put it in the path of any important systems (which also happens by default).
There are a bunch of ways that could go wrong. The most obvious would be somebody else seeing what you were doing and scaling it up, followed by it hacking its way out and putting itself in whatever systems it pleased. But there are others, especially since, if it does work, you are going to want to scale it up yourself. You may or may not be able to do that while keeping it from going amok, and, only loosely correlated with reality, you may or may not be able to convince yourself that you can.
And, depending on the architecture, you can’t necessarily predict the scale effects, so a scaled-down version may not tell you much.
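For a toy illustration of why that matters (purely hypothetical; the logistic shape, threshold, and sharpness below are all made up, not a claim about any real architecture): if capability is nearly flat below some parameter count and rises steeply above it, then everything you measure at small scale extrapolates to "scaling is harmless", right up until it isn't.

```python
# Toy model of emergent scaling: capability looks flat at small scale,
# then rises sharply past a threshold. All numbers are invented for
# illustration; nothing here describes a real system.
import math

def toy_capability(params: float, threshold: float = 1e9, sharpness: float = 5.0) -> float:
    """Hypothetical capability score in [0, 1] as a function of parameter count."""
    # Logistic in log-parameter space: near zero below the threshold,
    # transitioning quickly to near one above it.
    return 1.0 / (1.0 + math.exp(-sharpness * (math.log10(params) - math.log10(threshold))))

if __name__ == "__main__":
    for p in [1e6, 1e7, 1e8, 1e9, 1e10, 1e11]:
        print(f"{p:.0e} params -> capability {toy_capability(p):.3f}")
```

Every small-scale data point (1e6 through 1e8 parameters) sits near zero, so nothing in the scaled-down experiments predicts the jump.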
If you can’t build the AGI yourself, then you’re not at the top of the ML field, which means your architecture is unlikely to be correct, and you can publish it safely.
The only reason to publish it would be if OP thought it was correct, or at least that there was a meaningful chance it was. If you have an idea, but you’re convinced that the probability of it being correct is that low, then it seems to me that you should just drop it silently and not waste your and other people’s time on it.
Also, I’ll bet you 5 Postsingular Currency Units that pure machine learning, at least as presently defined by the people at the “top of the ML field”, will not generate a truly generally intelligent artificial agent, at or above human level, that can run on any real hardware. At least not before other AGIs, not entirely based on ML, have built Jupiter brains or something for it to run on. I think there’s gonna be other stuff in the mix.