This looks like important work. Like Gordon, I do expect that, upon closer examination, I will find functions in your code tasked with carrying out computations we don’t know how to do, or which may even be infeasible in their present form, e.g. “map each brain to its rational self’s values”. Great concept, but how many future scientific breakthroughs will we need before we know how to do that?
Nonetheless, even a schema for friendly AI has great value. It’s certainly progress beyond 2006. :-)
If you or anyone else could point to a specific function in my code that we don’t know how to compute, I’d be very interested to hear that. The only uncomputable step I know of is the calculation of Kolmogorov complexity, but that could be replaced by some finite approximation. The rest should be computable, though its complexity may be super-duper exponentially exponential.
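For instance, compressed length gives a crude computable upper bound on description length. A toy stand-in might look something like this (the helper below is purely illustrative, not anything from the repo):

```python
import zlib

def approx_kolmogorov_complexity(s: bytes) -> int:
    # Crude upper bound on K(s): the length of a lossless compression of s.
    # Any resource-bounded or approximate notion of description length could
    # be slotted in here instead; this is just the simplest example.
    return len(zlib.compress(s, 9))

# Highly regular data compresses to a much shorter description
# than less regular data of the same length.
print(approx_kolmogorov_complexity(b"ab" * 500))
print(approx_kolmogorov_complexity(bytes(range(256)) * 4))
```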
In the early stages, I would often find, as you expect, components that I thought would be fairly straightforward to define technically, only to realize upon digging in that they were not so clear and required more philosophical progress. Over time, the remaining issues came to look more like technical details than philosophical gaps, until eventually I wasn’t finding even technical gaps.
Then I started writing automated tests and uncovered more bugs, though for the most part these were minor enough that I think a sympathetic programmer could work out what was meant. I think around 42% of the procedures defined now have an automated test. Admittedly, these are generally the easier functions and simpler test cases. It turns out that writing code intended for an infinitely powerful computer doesn’t exactly lend itself to being tested on current machines. (Having a proper testing framework with the ability to stub and mock objects might help considerably, though.)
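For example, something like Python’s built-in unittest.mock would let me stub out the intractable pieces and test just the surrounding logic. A generic illustration, not code from the project:

```python
from unittest.mock import MagicMock

# Purely illustrative: pretend expensive_search is some intractable
# subroutine and select_best is the surrounding logic we want to test.
def select_best(candidates, expensive_search):
    scores = {c: expensive_search(c) for c in candidates}
    return max(scores, key=scores.get)

def test_select_best_with_stubbed_search():
    # Replace the intractable call with canned scores.
    fake_search = MagicMock(side_effect=lambda c: {"a": 1, "b": 3, "c": 2}[c])
    assert select_best(["a", "b", "c"], fake_search) == "b"
    assert fake_search.call_count == 3
```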
There are likely still many bugs in the untested parts, but I would expect them to be fairly minor. Still, I’m only one person, so I’d love to have more eyes on it. I also like the schema idea and have often thought of my work as a scaffold. Even if you disagree with one component, you might be able to just slot in a different philosophical theory. Perhaps you could even replace every component and still retain something of the flavor of my theory! I just hope it’s more like replacing Newtonian mechanics than phlogiston.
“If you or anyone else could point to a specific function in my code that we don’t know how to compute, I’d be very interested to hear that.”
From the comments in main():
“Given a set of brain models, associate them with the decision algorithms they implement.”
“Then map each brain to its rational self’s values (understood extensionally i.e. cashing out the meaning of their mental concepts in terms of the world events they refer to).”
Are you assuming that you have whole brain emulations of a few mature human beings? And then the “decision algorithms” and “rational… values” are defined in terms of how those emulations respond to various sequences of inputs?
Yeah, more or less. In the abstract, I “suppose that unlimited computation and a complete low-level causal model of the world and the adult human brains in it are available.” I’ve tended to imagine this as an oracle that just has a causal model of the actual world and the brains in it. But whole brain emulations would likely also suffice.
In the code, the causal models of the world and brains in it would be passed as parameters to the metaethical_ai_u function in main. The world w and each element of the set bs would be an instance of the causal_markov_model class.
Each brain gets associated with an instance of the decision_algorithm class by calling the class function implemented_by. A decision algorithm models the brain in higher-level concepts like credences and preferences, as opposed to bare causal states. And yeah, in determining both the decision algorithm implemented by a brain and its rational values, we look at its responses to all possible inputs.
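Schematically, the wiring looks something like this (a simplified sketch with illustrative signatures, not the actual definitions):

```python
# Simplified skeleton only; the real classes carry much more structure
# and the real functions are far more involved.
class causal_markov_model:
    """Low-level causal model of a system (the world or a single brain)."""
    def __init__(self, states, transitions):
        self.states = states
        self.transitions = transitions

class decision_algorithm:
    """Higher-level model of a brain in terms of credences and preferences."""
    @classmethod
    def implemented_by(cls, brain):
        # In the real code, this searches for the decision algorithm that
        # best explains the brain's responses to all possible inputs.
        ...

def metaethical_ai_u(w, bs):
    # w  : causal_markov_model of the world
    # bs : set of causal_markov_model instances, one per brain
    # Associate each brain model with the decision algorithm it implements.
    algs = {b: decision_algorithm.implemented_by(b) for b in bs}
    # Then map each brain to its rational self's values, aggregate, etc.
    ...
```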
For implementation, we aim for isomorphic, coherent, instrumentally rational, and parsimonious explanations. For rational values, we aggregate the values of possible continuations, weighting more heavily those that better satisfied the agent’s own higher-order decision criteria without introducing too much unrelated distortion of values.
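Very roughly, that aggregation step has the shape of a weighted average. The names and the particular weighting scheme below are only a schematic illustration, not the actual definition:

```python
from collections import defaultdict

def aggregate_rational_values(continuations, satisfaction, distortion, values):
    # Schematic only. Assumed inputs, supplied by the surrounding machinery:
    #   continuations : iterable of possible continuations of the agent
    #   satisfaction  : continuation -> degree (in [0, 1]) to which it satisfied
    #                   the agent's own higher-order decision criteria
    #   distortion    : continuation -> degree (in [0, 1]) of unrelated
    #                   distortion of values it introduced
    #   values        : continuation -> dict mapping outcomes to utilities
    weights = {c: satisfaction(c) * (1.0 - distortion(c)) for c in continuations}
    total = sum(weights.values()) or 1.0
    aggregated = defaultdict(float)
    for c, w in weights.items():
        for outcome, utility in values(c).items():
            aggregated[outcome] += (w / total) * utility
    return dict(aggregated)
```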