This is a great post, and some great points are made in the discussion too.
Is it possible to make exact models exhibiting some of these intuitive points? For example, there is a debate about whether extrapolated human values would depend strongly on cognitive content or whether they could be inferred just from cognitive architecture. (This could be a case of metamoral relativism, in which the answer simply depends on the method of extrapolation.) Can we come up with simple programs exhibiting this dichotomy, and simple constructive “methods of extrapolation” which can give us some sense of how value extrapolation is supposed to work?
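As a first stab at what such a "simple program" might look like, here is a minimal sketch. Everything in it (the `Agent` class, the two extrapolation functions, the value weights) is invented for the illustration; the point is only that one can write an extrapolation rule that reads just the shared architecture and another that also reads the agent-specific content, and they give different answers:

```python
# Toy illustration, not a serious model: two "agents" share the same
# decision architecture (the Agent class and its choose() rule) but
# differ in cognitive content (their acquired value weights). Two
# hypothetical extrapolation methods are contrasted.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # "Content": weights this particular agent happens to have acquired.
    values: dict = field(default_factory=dict)

    # "Architecture": the fixed rule mapping weights to choices,
    # identical across all agents of this type.
    def choose(self, options):
        return max(options, key=lambda o: self.values.get(o, 0.0))

def extrapolate_from_architecture(agent):
    """Ignores the weights entirely; only the shared choose() rule
    matters, so every agent with this architecture gets the same answer."""
    return {"maximize_whatever_is_weighted": 1.0}

def extrapolate_from_content(agent, growth=1.5):
    """Reads the weights and amplifies them, so the result depends on
    what this particular agent has learned."""
    return {k: v * growth for k, v in agent.values.items()}

alice = Agent(values={"honesty": 0.9, "novelty": 0.2})
bob = Agent(values={"honesty": 0.1, "novelty": 0.8})

# Architecture-only extrapolation cannot distinguish the two agents...
assert extrapolate_from_architecture(alice) == extrapolate_from_architecture(bob)

# ...while content-based extrapolation preserves their differences.
print(extrapolate_from_content(alice))  # honesty-heavy
print(extrapolate_from_content(bob))    # novelty-heavy
```

Obviously this just builds the dichotomy in by hand, but even a toy like this makes the relativism point concrete: which extrapolated values you get is a fact about the method, not only about the agent.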
Do we have any useful examples of reflectively improved decision theory? The draft document on TDT argues in some places that TDT is better than the alternatives, so it might be proposed as a case study—that is, one could examine the arguments in favor of TDT, and try to determine the normative principles which are at work.
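To make the "TDT is better" claim concrete, here is the standard payoff arithmetic from Newcomb's problem, written out as a sketch. The payoff matrix is the usual one; the predictor-accuracy model and the agent labels are simplifications introduced here, not taken from the TDT draft, and this is only the expected-value calculation that motivates one-boxing, not TDT's actual machinery:

```python
# Minimal Newcomb's-problem payoff comparison. A TDT-style agent treats
# its choice and the prediction as correlated and one-boxes; a CDT-style
# agent two-boxes by dominance reasoning.

def newcomb_payoff(one_boxes, predictor_predicts_one_box):
    big = 1_000_000 if predictor_predicts_one_box else 0
    small = 0 if one_boxes else 1_000
    return big + small

def expected_payoff(one_boxes, accuracy=0.99):
    # With probability `accuracy` the predictor guesses the agent's
    # actual choice; otherwise it guesses the opposite.
    return (accuracy * newcomb_payoff(one_boxes, one_boxes)
            + (1 - accuracy) * newcomb_payoff(one_boxes, not one_boxes))

print("one-boxer expected payoff:", expected_payoff(True))   # 990,000
print("two-boxer expected payoff:", expected_payoff(False))  # 11,000
```

The interesting normative question is not this arithmetic (which everyone agrees on) but why the one-boxing calculation is the right one to run, and that is exactly the kind of principle a case study of the TDT arguments might surface.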