I’ve argued against this plan but I’m guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.
For the record: I, too, want an FAI team in which Eliezer isn’t the only one with Eliezer-level philosophical ability or better. This is tougher than “merely” finding 1-in-10-million math talents, but still do-able.
What am I doing about it? I wrote a post encouraging a specific kind of philosophical education that I think will be more likely to produce Eliezer-level philosophers than a “normal” philosophical education (or even a CMU or UPitts one). When Louie came up with the idea to write a list of Course recommendations for Friendliness researchers, I encouraged it. Also, one of the reasons I ended up supporting the plan to launch CFAR in 2012 was its potential not only to make people more effective at achieving their goals, but also to learn ways to make some people better philosophers (see my last paragraph here). And there’s more, but I can’t talk about it yet.
Also, as Eliezer said, Paul Christiano’s existence is encouraging.
What about Kawoomba’s existence? :-(
CFAR and related efforts are good at raising the sanity waterline (which is an average), but not so much at identifying the extreme outliers who could Alan-Turing their way towards an FAI. Those will make waves on their own.
Such grassroots organisations may be good ways of capturing the attention of a wider audience, though still second to publishing in the field or personally building a network at conferences.
Taking a few hundred self-selected, college-aged students and trying to grow one of them into a seminal figure of extraordinary capabilities seems prohibitive in both time horizon and viability, especially when there are already exceedingly capable people at Stanford et al. who bring the oomph and just lack the FAI-motivation.
Can you name some older academics that have the requisite philosophical skill? (And if your first line isn’t a joke, perhaps you can link me to some of your own philosophical works?)
Sipser, Russell & Norvig, et al. are core parts of your proposed philosophical curriculum; Louie's course recommendations read like my former grad CS reading list.
It follows that, say, many people with or pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills.
I’m not postulating that Bayesian superstars also make the best drummers and fencing masters, but between your analytical CS-style philosophy and Machine Learning groups there is a cross-domain synergy effect that comes with the clarity of designing minds—or advanced algorithms.
(As for myself, the first line was meant as a joke—alas! How sad!)
It follows that, say, many people with or pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills.
No, I wouldn’t say that. The problem is that we (humans) don’t know how to teach the philosophical skill I’m talking about, so there aren’t classes on it, so I can only recommend courses on “the basics” or “prerequisites.” I don’t know how to turn a math/CS PhD under Stuart Russell into the next Eliezer Yudkowsky.