In fact, I’d be quite curious which approaches you think are the most promising before deciding.
Thanks for asking! Like several other LWers, I think that with the rapid advances in ANN-based AI, it looks like pretty clear sailing for artificial neural networks to become the first form of AGI. With the recent FLI grants there are now a number of teams working on machine learning of human values and making neural networks safer, but nobody so far is taking the long view of asking what happens when an ANN-based AGI becomes very powerful but doesn’t exactly share our values or philosophical views. It would be great if there were a team working on those longer-term problems, like how to deal with inevitable differences between what values an AGI has learned and what our actual values are, and understanding metaphilosophy well enough to be able to teach an ANN-based AGI how to “do philosophy”.
I can say, however, that I’m in contact with both Elon and Demis, and that I’m not currently worried about Elon disappearing into the mist :-)
That’s good. :) BTW, are you familiar with Demis’s views? From various news articles quoting him, he comes across as quite complacent, but I wonder if that’s a mistaken impression or if he has different private views.
I am getting the most uncanny sense of deja vu.