It was interesting to read the conversation between Eliezer and PJEby in the comments, as they are two very smart people whom I admire.
PJEby puts emphasis on the “growth mindset”. I agree with that. I just think that working for years to save the world, build a superintelligent friendly AI, raise the sanity waterline, and expand one’s harem :) is a textbook example of a growth mindset. I guess even Steve Jobs didn’t have ambitions that high; he seemed satisfied with making a few shiny toys.
I suspect the difference is that for Eliezer, the “mind” he expects to grow is not limited to a human mind, but extends to other people, and ultimately to the artificial intelligence. For a typical self-improvement fan, the goal is to expand their own human mind in order to achieve all the great goals. For Eliezer, the proper way seems to be to expand the whole “Eliezer’s brain + rationalist community + Friendly AI” system, until all the great goals are achieved. It all happens in a causally connected universe, and for a consequentialist the important outcome is that things get done, no matter who specifically does them. If it is great to do X, it is equally great to start a movement or design a machine that does X, and one should rationally choose the best path. There is no need to emphasise the human brain part, except when it really is the best tool for the job.
Saying “I don’t believe I can fly by merely waving my hands” is not contradictory to a growth mindset, if the person has already started an airplane construction project.
(That said, I also think Eliezer underestimates PJEby’s expertise.)