Well, I know all the possible problems and obstacles in development. I solved them in my calculations and can also solve each problem individually. It seems reasonable to assume that it will work, and I am not alone in this assessment. But only a real experiment would prove it. And since I do not bear the risk alone, but all of you with me, I wanted to ask for your opinion.
Well, I know all the possible problems and obstacles in development.
I wouldn’t trust someone to do anything safety-critical if they claim to know all possible problems and obstacles. Unknown unknowns are always part of the concern when doing something new.
If you actually do decide to run this, I recommend doing it on an airgapped computer and committing, if it manages to self-improve in any way, to showing the thing to someone well-versed in AI safety before removing the airgap.
You’re absolutely right. I only claim to have solved the known manufacturing problems: I know how to build/code one. Not the unknown problems that are sure to come.
I didn’t mean security issues. That question is still open.
Development on a laptop or a separate system is not possible given the required computing capacity. And yes, I am talking to universities and specialists; it’s all in the works. But none of them can answer the moral question for me.
Folks, I’m not far from completion, only a few months. And then what? I wanted to think about a few things beforehand.
Because you can’t trap an AGI in a box; it will always find a way out. I see the code here in front of me. I see the potential. Believe me, I don’t believe it can be held in a ‘black box’. The question is also: should we?
Do you want to be born in a maximum security prison?
What opinion should the AI have of us when we put it in jail?
It seems like if your AGI actually works, there’s a good chance that it kills humanity.
But isn’t humanity already killing itself? Maybe an AI is our last chance to survive?
No, population is growing. Spending a few additional decades on AI safety research likely improves our chances of survival. Of course, listening to AI safety researchers, and not just AI researchers from a random university, matters as well.
I’ve read quite a bit about this area of research. I haven’t found a clear solution anywhere. There is only one point that everyone agrees on. With increasing intelligence, the possibility of control declines just as the capabilities, and the risks, rise.
I haven’t found a clear solution anywhere. There is only one point that everyone agrees on.
Yes, according to current knowledge most AGI designs are dangerous. Speaking to researchers could help; one of them might be able to explain to you why your particular design is dangerous.