See some of my criticisms of the concept: http://lesswrong.com/lw/cxk/thoughts_and_problems_with_eliezers_measure_of/
Thanks. That criticism makes sense to me. You put the point very concretely.
What do you think of the use of optimization power in arguments about takeoff speed and x-risk?
Or do you have a different research agenda altogether?
As an informal concept, it’s a good one (and better than “intelligence”), as long as it’s not taken too literally.