You can add questions to Stampy: if you click "I'm asking something else", it'll show you five unanswered questions that sound similar, whose priority you can then bump. If none of them match, click "None of these: Request an answer to my exact question above" to add your question to the queue.
But these arguments essentially boil down to "If you program a computer with a few simple explicit laws, it will fail in complex ethical scenarios".
But that is not how neural nets are trained. Instead, we train them on complex scenarios, which is also how humans learn ethics.
(about “hostile”)
https://ui.stampy.ai?state=6982_
https://ui.stampy.ai?state=897I_
And suddenly it seems Stampy has no answer for "Why inner misalignment is the default outcome". But EY has said a lot about it, and it's easy to find.
I am well aware of these claims. They ignore other ways to construct AGI, such as stateless open agency systems similar to those that already exist.