OCaml is my favorite language. At some point you should also learn Prolog and Haskell to have a well-rounded education.
lukstafi
Actually, the ratio alone is not sufficient, because there is a reward for two-boxing related to “verifying whether Omega was right”: if Omega is right “a priori”, then I see no point in two-boxing above 1:1. I think the poll would be more meaningful if 1 stood for $1. ETA: actually, “verifying” or “being playful” might mean, for example, tossing a coin to decide.
An interesting problem with CEV is demonstrated in chapter 5, “On the Rationality of Preferences”, of Hilary Putnam’s “The Collapse of the Fact/Value Dichotomy and Other Essays”. The problem is that a person might assign value to the fact that a choice of preference, underdetermined at a given time, is made of her own free will.
I agree with your premise; I should have talked about moral progress rather than CEV. ETA: one does not need a linear order for the notion of progress; there can be multiple “basins of attraction”. Part of the dynamics consists of decreasing inconsistencies and increasing robustness.
I agree. In case it’s not clear, my opinion is that an essential part of being a person is developing one’s value system. It’s not something that you can entirely outsource because “the journey is part of the destination” (but of course any help one can get matters) and it’s not a requirement for having ethical people or AI. ETA: i.e. having a fixed value system is not a requirement for being ethical.
The last forbidden transition would be the very last one, since it’s outright wrong while the previous ones do seem to have reasons behind them.
Valuing everything means you want to get as far from nothingness as you can. You value more types being instantiated over fewer types being instantiated.
Logically.
By letting people evolve their values at their own pace, within ethical boundaries.
I’m with you up to 6. Having a terminal value on everything does not mean that the final consistent evaluation is uniform over everything, because instrumental values come into play—some values cancel out and some add up. But it does mean that you have justifications to make before you start destroying stuff.
Is each participant limited to submitting a single program? Have you considered “team mode”, where the results of programs from a single team are summed up?
No, I mean that we might give a shit even about quite alien beings.
I presume by “the same world” you mean a sufficiently overlapping class of worlds. I don’t think that “the same world” is well defined. I think that determining in particular cases what is “the world” you want affects who you are.
My point is that the origin of values, the initial conditions, is not the sole criterion for determining whether a culture appreciates given values. There can be convergence or “discovery” of values.
Another point is that a value (actually, a structure of values) shouldn’t be confused with a way of life. Values are abstractions: various notions of beauty, curiosity, elegance, so-called warmheartedness… The exact meaning of any particular such term is not a metaphysical entity, so it is difficult to claim that an identical term is instantiated across different cultures / ways of life. But there can be very good translations that map such terms onto a different way of life (and back). ETA: there are multiple ways of life in our cultures; a person can change her way of life by pursuing a different profession or a different hobby.
I appeal to: (1) the consideration of whether the inter-translatability of science, and the valuing of certain theories over others, depend on the initial conditions of the civilization that develops it; (2) the universality of decision-theoretic and game-theoretic situations; (3) the evolutionary value of versatility, hinting at an evolved value of diversity.
? I have a different conception of romantic love. I could swear I’ve been in love with my kindergarten teacher. And I was “dating” girls two years later. It ended, though, as this part of myself grew introverted, still before puberty.
Do you think that CEV-generating mechanisms are negotiable across species? I.e. whether other species would have a concept of CEV and would agree to at least some of the mechanisms that generate a CEV. It would enable determining which differences are reconcilable and where we have to agree to disagree.
I mean learning Prolog in the way it would be taught in a “Programming Languages” course, not as an attempt at facilitating AI. Two angles are important here: (1) programming paradigm features: learning the concept of late-bound / dataflow / “logical” variables. http://en.wikipedia.org/wiki/Oz_(programming_language) is an OK substitute. (2) logic, which is also something to be taught in a “Programming Languages” context, not (only) in AI context. With Prolog, this means learning about SLD-resolution and perhaps making some broader forays from there. But one could also explore connections between functional programming and intuitionistic logics.
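To make the “logical variables” angle concrete, here is a minimal sketch of unification, the mechanism that gives Prolog its late-bound variables (an illustrative toy in Python, since a Prolog runtime isn’t assumed here; all names are my own, not from any course or library):

```python
# Toy unification: the core of Prolog's "logical" variables.
# A Var is unbound until unification extends the substitution to bind it.

class Var:
    """A logical variable, identified by identity; name is for display only."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def walk(t, subst):
    """Chase variable bindings until reaching a non-variable or an unbound var."""
    while isinstance(t, Var) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Return a substitution (dict) extending `subst` that makes a and b equal,
    or None if no such substitution exists."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        # Compound terms: unify argument-wise, threading the substitution.
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying f(X, 2) with f(1, Y) binds X to 1 and Y to 2,
# the single-step analogue of what SLD-resolution does at each goal.
X, Y = Var("X"), Var("Y")
s = unify(("f", X, 2), ("f", 1, Y), {})
print(walk(X, s), walk(Y, s))  # 1 2
```

SLD-resolution then amounts to repeatedly unifying the current goal with a clause head and replacing it with the clause body; dataflow variables in Oz expose the same “bind once, later” behavior directly in the language.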