@Doug & Gray: AGI is a William Tell target. A near miss could be very unfortunate. We can’t responsibly take a proper shot till we have an appropriate level of understanding and confidence of accuracy.
This keeps coming up; is there somewhere it is explained in detail? Also, have possible solutions been considered, such as constructing the AI in a controlled environment? If so, why wouldn't any of them work?
Thanks to whoever responds.
Try “The Two Faces of Tomorrow” by James P. Hogan. Fictional evidence, to be sure, but well-thought-out fiction that demonstrates the problem well.