I imagine that a sufficiently high-resolution model of human cognition et cetera would factor into sets of individual equations for calculating variables of interest, similar to how Newtonian models of planetary motion do.
However, I don’t see why the equations themselves, sitting on disk or in memory, should pose a problem.
When we want particular predictions, we would have to instantiate those equations somehow, whether by plugging x=3 into F(x) or by evaluating a differential equation with x=3 as an initial condition. The details would depend on the specifics of the person-model; but if we evaluated only a sufficiently small subset of the equations, or refactored them into a sufficiently small set of new ones, we might avoid the moral dilemmas of computing sentient things.
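To make the "sufficiently small subset" idea concrete, here is a rough sketch in Python. The model, the variable names, and the dependency structure are all hypothetical toys of my own invention; the only point is that answering one narrow question need only evaluate the equations it actually depends on, never the full model.

```python
# Hypothetical person-model factored into named equations with explicit
# dependencies: each variable maps to (dependencies, function).
equations = {
    "x":      ([],         lambda: 3.0),            # initial condition x = 3
    "f_of_x": (["x"],      lambda x: x**2 + 1.0),   # F(x)
    "motor":  (["f_of_x"], lambda f: f * 0.1),      # some downstream variable
    # ... thousands more equations in the full person-model ...
}

def evaluate(variable, equations, cache=None):
    """Recursively evaluate only the equations that `variable` depends on."""
    cache = {} if cache is None else cache
    if variable in cache:
        return cache[variable]
    deps, fn = equations[variable]
    args = [evaluate(d, equations, cache) for d in deps]
    cache[variable] = fn(*args)
    return cache[variable]

# Answer a narrow question without touching the rest of the model:
# this evaluates only x, f_of_x, and motor.
print(evaluate("motor", equations))
```

Whether the reachable subset is ever actually small enough to be morally safe is, of course, the open question.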
If, on the other hand, we couldn’t do the above for whatever we’re interested in calculating, then what about separating the calculation into small, safe, sequentially calculated units, where “safe” means that no unit individually models anything cognizant? If at the end we sewed the states of those units together into a final state, could that still pose moral issues? This gets into Greg Egan-esque territory.
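Here is a minimal sketch of that second strategy, under the large and entirely unargued assumption that a single small partial update is individually "safe". The update rule, the state, and the unit boundaries are hypothetical stand-ins, not a real person-model.

```python
import numpy as np

def update_unit(unit_state, dt):
    """Advance one small piece of the model; a toy stand-in for real dynamics."""
    return unit_state + dt * np.tanh(unit_state)

def step_in_safe_units(full_state, dt, n_units):
    """Split the state, advance each unit on its own, then sew the results
    back into the next full state. At no point is the whole model evolved
    at once, only one small unit at a time."""
    units = np.array_split(full_state, n_units)
    advanced = [update_unit(u, dt) for u in units]   # computed sequentially
    return np.concatenate(advanced)                  # the "sewing together"

state = np.random.default_rng(0).normal(size=1000)   # toy stand-in for the model state
for _ in range(100):
    state = step_in_safe_units(state, dt=0.01, n_units=50)
```

Note that this toy version ignores coupling between units; a real model would have to handle those interactions somehow, which is exactly where it becomes unclear whether any unit is really "safe" on its own.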
It’s not clear that the previous two calculation strategies are always possible. Another option, though, might be to take care to always phrase our questions so that the first strategy is possible. For example, instead of asking whether a person will go left or right at a fork, maybe it’s enough to ask a narrower question about some specific brain center.
And now that I’ve written all that, I realize the whole point of the predicates is in determining what counts as a “sufficiently small” subset of equations, or what kinds of units are “safe”.
This isn’t a satisfactory answer, but it seems like determining which calculations are “safe” would be tied to understanding the necessary conditions under which human cognition (and the like) arises.
Also, carrying it a step further, I would argue that we need not just person predicates, but predicates that rule out modeling any kind of morally wrong situation. I wouldn’t want to be accidentally burning kittens.