That’s fine, except you have NO EVIDENCE that AGI is hostile or as capable as you claim, nor support for any of your claims. Yes, I also agree it’s possible, but there is no evidence yet that any of this stuff works.
EY has often argued this point elsewhere, but failed to do so this time, which is a pretty bad “comms” problem when addressing a general audience.
Yes, but what valid argument exists? The possibility cloud is larger than anything he considers, and he has no evidence that nature works exactly the way he claims. (Note that it may in fact work exactly that way.)
I’m not strongly convinced by these claims either, but that’s another issue.
(about “hostile”)
https://ui.stampy.ai?state=6982_
https://ui.stampy.ai?state=897I_
And suddenly it seems stampy has no answer for “Why inner misalignment is the default outcome”. But EY has said a lot about it; it’s easy to find.
I am well aware of these claims. They ignore other methods of constructing AGI, such as stateless open-agency systems similar to those that already exist.
You can add questions to stampy: if you click “I’m asking something else”, it’ll show you five unanswered questions that sound similar, whose priority you can then bump. If none of them match, click “None of these: Request an answer to my exact question above” and it will be added to the queue.
But these arguments essentially boil down to “if you program a computer with a few simple explicit laws, it will fail at complex ethical scenarios”.
That is not how neural nets are trained, though. Instead, we train them on complex scenarios, which is how humans learn ethics, too.
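As a rough illustration of that contrast, here is a minimal sketch (toy data, a hypothetical two-feature “scenario” encoding, and scikit-learn as a stand-in; none of this comes from the discussion above): instead of coding a few explicit laws, you fit a model to many labelled scenarios and let it generalise from them.

```python
# Minimal sketch: explicit hand-coded rules vs. learning from labelled scenarios.
# Toy data and hypothetical features -- not anyone's actual training setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

# "A few simple explicit laws": brittle on nuanced cases by construction.
def rule_based_judgement(harm_score: float, consent: float) -> int:
    return 0 if harm_score > 0.5 else 1  # e.g. "never allow harm above a threshold"

# "Train on complex scenarios": fit a model to (scenario, judgement) pairs instead.
rng = np.random.default_rng(0)
X = rng.random((200, 2))                              # toy features: [harm_score, consent]
y = ((X[:, 1] > 0.5) & (X[:, 0] < 0.8)).astype(int)   # toy labels standing in for human judgements

model = LogisticRegression().fit(X, y)                # learns a boundary from examples
print(rule_based_judgement(0.6, 0.9))                 # the rule ignores consent entirely
print(model.predict([[0.6, 0.9]])[0])                 # the learned model weighs both features
```

The sketch only shows the shape of the argument: the first function is fixed by whatever laws were written down, while the second is whatever the training data made it, for better or worse.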