Starting this week, I’m moving to a “Consulting CTO” position with Oculus.
I will still have a voice in the development work, but it will only be consuming a modest slice of my time.
As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague “line of sight” to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn’t in sight. I decided that I should give it a try before I get too old.
I’m going to work on artificial general intelligence (AGI).
I think it is possible, enormously valuable, and that I have a non-negligible chance of making a difference there, so by a Pascal’s Mugging sort of logic, I should be working on it.
For the time being at least, I am going to be going about it “Victorian Gentleman Scientist” style, pursuing my inquiries from home, and drafting my son into the work.
Runner up for next project was cost effective nuclear fission reactors, which wouldn’t have been as suitable for that style of work. 😊
My mind skipped over this the first time, but hey look! He’s using Eliezer’s term. Interesting. Kinda sad, given that the term describes something you should never do. Not that you shouldn’t work on AI, but you should work on AI because it is very likely to be a big deal, and good researchers have a large impact on how a field and its engineering efforts play out. (I agree this domain is quite hard, but it’s not as impossibly hard as brute-forcing a random password with a hundred ASCII characters.)
I’d imagine he was reaching for a term for a “generalised Pascal-like situation”. Calling it a Pascal’s wager wouldn’t work because Pascal’s wager proper wasn’t a valid argument.
Hm I guess it is a bit sad that there isn’t a term for this.
Here it is:
Thanks. And very cool. Someone should send him the AI Alignment Forum sequences, in case he wants some interesting subproblems to think about.