Are you an engineer?

If so, then just build the AGI yourself. Experimentation trumps talk. Actually building the thing will teach you more about the architecture than discussing it with other people. Building a scaled-down version is perfectly safe, provided you limit its compute (which happens by default) and don’t put it in the path of any important systems (which also happens by default).
If you can’t build the AGI yourself, then you’re not at the top of the ML field, which means your architecture is unlikely to be correct and you can publish it safely.
I can’t say I am one, but I am currently working on research and prototyping, and I will probably stick to that until I can prove some of my hypotheses, since I do have access to the tools I need at the moment.

Still, I didn’t want this post to be relevant only to my case; as I stated, I don’t think the probability of success is meaningful. But I am interested in the community’s opinions on other, similar cases.

edit: It’s kinda hard to answer your comment since it keeps changing every time I refresh. By “can’t say I am one” I mean a “world-class engineer” in the original comment. I do appreciate the change of tone in the final (?) version, though :)
Building a scaled-down version is perfectly safe, provided you limit its compute (which happens by default) and don’t put it in the path of any important systems (which also happens by default).
There are a bunch of ways that could go wrong. The most obvious would be somebody else seeing what you were doing and scaling it up, followed by it hacking its way out and putting itself in whatever systems it pleased. But there are others, especially if it does work: you are going to want to scale it up, which you may or may not be able to do while keeping it from going amok, and, only loosely correlated with reality, you may or may not be able to convince yourself that you can.
And, depending on the architecture, you can’t necessarily predict the scale effects, so a scaled-down version may not tell you much.
If you can’t build the AGI yourself, then you’re not at the top of the ML field, which means your architecture is unlikely to be correct and you can publish it safely.
The only reason to publish it would be if OP thought it was correct, or at least that there was a meaningful chance. If you have an idea, but you’re convinced that the probability of it being correct is that low, then it seems to me that you should just drop it silently and not waste your and other people’s time on it.
Also, I’ll bet you 5 Postsingular Currency Units that pure machine learning, at least as presently defined by the people at the “top of the ML field”, will not generate a truly generally intelligent artificial agent, at or above human level, that can run on any real hardware. At least not before other AGIs, not entirely based on ML, have built Jupiter brains or something for it to run on. I think there’s gonna be other stuff in the mix.