Second-order logical version of Solomonoff induction.
Non-Cartesian version of Solomonoff induction.
This raises the question: do you even know what Solomonoff induction is? (Edit: to be honest, my best guess is that you don't even know the terms with which to know the terms with which to know the terms… a couple dozen layers deep, with which to know what it is. The topic is quite complicated, but looks deceptively simple.)
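For reference, since the thread keeps invoking it without stating it, here is a minimal statement of the core object: Solomonoff's universal prior over binary strings, relative to a fixed universal prefix machine U (standard notation, not taken from the comment above):

```latex
% Universal prior M(x): sum over programs p that make U output a
% string beginning with x, weighted by program length \ell(p).
M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-\ell(p)}

% Prediction of the next bit b given the observed string x:
M(b \mid x) \;=\; \frac{M(xb)}{M(x)}
```

M is only lower semicomputable, which is part of why the topic is "complicated but looks simple": the induction itself is uncomputable and only approximable.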
Constructing utility functions
If you manage to construct a utility function (and by construct, I mean formally define in mathematics, build it up from elementary operations) that actually picks out real-world quantities for an agent to maximize, as opposed to finding maxima of functions in the abstract mathematical sense, that will be a step towards robot apocalypse. It would be a step away from the current approaches, which are safe precisely because they simply won't work the way you guys think a utility maximizer would work, and so don't lead to the doom scenarios you predict to arise from 'utility maximization'. (I am pretty sure you won't manage to construct it, though, and even if you do, nobody competent enough to implement it would be dumb enough to do so.)
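To make that distinction concrete, here is a minimal toy sketch (all names hypothetical, not anyone's actual proposal): the "utility" below is an ordinary function of the agent's internal model variables, and maximizing it is just argmax over predicted model states; nothing in the construction formally refers to a quantity in the real world, which is the grounding step being argued about.

```python
# Toy "utility maximizer" over an internal world model.
# The utility is a function of model state, not of the world itself.
from typing import Callable, Dict, List

State = Dict[str, float]   # the agent's internal model state
Action = str

def predict(state: State, action: Action) -> State:
    """Hypothetical world model: returns the predicted next model state."""
    next_state = dict(state)
    if action == "mine":
        next_state["iron_estimate"] = state["iron_estimate"] + 1.0
    # "wait" leaves the predicted state unchanged
    return next_state

def utility(state: State) -> float:
    """An 'abstract' utility: just reads off a model variable."""
    return state["iron_estimate"]

def choose_action(state: State, actions: List[Action],
                  u: Callable[[State], float]) -> Action:
    """Pick the action whose predicted model state scores highest under u."""
    return max(actions, key=lambda a: u(predict(state, a)))

if __name__ == "__main__":
    model_state = {"iron_estimate": 3.0}
    print(choose_action(model_state, ["mine", "wait"], utility))  # -> "mine"
```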