I also see no explanation as to why knowledge of objective reality is of any value, even derivative; objective reality is there, and is what it is, regardless of whether it’s known or not.
You and I can influence the future course of objective reality, or at least that is what I want you to assume. Why should you assume it, you ask? For the same reason you should assume that reality has a compact algorithmic description (an assumption we might call Occam’s Razor): no one knows how to be rational without assuming it. In other words, it is an inductive bias necessary for effectiveness.
It is an open question which future courses are good and which are evil, but IMO neither the difficulty of the question nor the fact that no one has so far advanced a satisfactory answer for futures involving ultratechnologies and intelligence explosions removes from you and me the obligation to search for an answer as best we can, or to contribute in some way to the search. This contribution can take many forms. For example, many contribute by holding down a job in which they make lunches for other people to eat, or a job in which they care for other people’s elderly or disabled family members.
That last is the same as saying that you should seek power, but without saying what the power is for.
The power is for searching for a goal greater than ourselves, and, if the search succeeds, for achieving that goal. The goal should follow from the fundamental principles of rationality and from correct knowledge of reality. I do not know what that goal is; I can only hope that someone will recognize it when they see it. But I can rule out paperclip maximization, and I am almost sure I can rule out saving each and every human life. That last goal is not IMO worthwhile enough for a power as large as the power that comes from an explosion of general intelligence. I believe that Eliezer should be free to apply his intelligence and his resources to a goal of his own choosing and that I have no valid moral claim on his resources, time or attention. My big worry is that even if my plans do not rely on his help or cooperation in any way, the intelligence explosion Eliezer plans to use to achieve his goal will prevent me from achieving my goal.
I like extended back-and-forth. Since it is not common in blog comment sections, let me repeat my intention to continue checking back here; in fact, I will check back until further notice.
This comment section is now 74 hours old. Once a comment section has reached that age, I suggest, it is read mainly by people who have already read it and are checking back for replies to particular conversational threads.
I would ask the moderator to allow longer conversations and even longer individual comments once a comment section reaches a certain age.
Mitchell Porter, please consider the possibility that many if not most of the “preference-relevant human cognitive universals” you refer to are a hindrance rather than a help to agents who find themselves in an environment as different from the EEA as ours is. It is my considered opinion that my main value to the universe derives from the ways my mind is different: differences I believe I acquired by undergoing experiences that would have been extremely rare in the EEA. (Actually, the experiences would have been depressingly common; what would have been extremely rare is for an individual to survive them.) So your reply that CEV will factor out the “contingent idiosyncrasies . . . of particular human beings” does not exactly ease my fear that the really powerful optimizing process will cancel my efforts to affect the far future.